The U.S. and China

Much ink has been spilled regarding the relationship between the U.S. Federal Government and the Technology Industry. Some of it has been shrill.

That is why I gravitated to a recent op-ed written by former Google CEO Eric Schmidt. The title of the piece, “Silicon Valley Needs the Government,” reveals a lot.

Dr. Schmidt has street cred that few others possess. He is the chairman of the National Security Commission on Artificial Intelligence and the Defense Innovation Board.

Here is the preamble to his article: “I Used to Run Google. Silicon Valley Could Lose to China. We can’t win the technology wars without the federal government’s help.” Then he continues:

America’s companies and universities innovate like no other places on earth. We are garage start-ups, risk-taking entrepreneurs and intrepid scholars exploring new advances in science and technology. But that is only part of the story.

Many of Silicon Valley’s leaders got their start with grants from the federal government — including me. My graduate work in computer science in the 1970s and ’80s was funded in part by the National Science Foundation and the Defense Advanced Research Projects Agency.

But in recent years, Americans — Silicon Valley leaders included — have put too much faith in the private sector to ensure U.S. global leadership in new technology. Now we are in a technology competition with China that has profound ramifications for our economy and defense — a reality I have come to appreciate as chairman of two government panels on innovation and national security. The government needs to get back in the game in a serious way.

Important trends are not in our favor. America’s lead in artificial intelligence, for example, is precarious. A.I. will open new frontiers in everything from biotechnology to banking, and it is also a Defense Department priority. Leading the world in A.I. is essential to growing our economy and protecting our security. A recent study considering more than 100 metrics finds that the United States is well ahead of China today but will fall behind in five to 10 years. China also has almost twice as many supercomputers and about 15 times as many deployed 5G base stations as the United States. If current trends continue, China’s overall investments in research and development are expected to surpass those of the United States within 10 years, around the same time its economy is projected to become larger than ours.

Unless these trends change, in the 2030s we will be competing with a country that has a bigger economy, more research and development investments, better research, wider deployment of new technologies and stronger computing infrastructure.

You can read the full piece here.

Tech Rising?

Last month, I blogged about technology and featured an article that asked the question, “Has Technology Peaked?”

Piling on to that post, here are the most recent figures (as of February 11, 2020) from the Wall Street Journal for the companies valued at over one trillion dollars:

  • Microsoft: $1.44 trillion
  • Apple: $1.41 trillion
  • Amazon: $1.06 trillion
  • Alphabet: $1.04 trillion

Oh, and Facebook is the next-most highly valued company at $607 billion.

Has technology peaked? What do you think?

Has Technology Peaked?

A great deal of ink has been spilled trying to guess where technology is headed. After decades of spectacular advances, many think the progress has peaked.

I don’t think it has, and that view was reinforced by a recent article from Andy Kessler, who follows technology for the Wall Street Journal. Here is how he begins:

Does history rhyme? A century ago, the ’20s boomed, driven by consumer spending on homes, cars, radios and newfangled appliances like refrigerators, sewing machines and vacuum cleaners. Most Americans couldn’t afford the upfront cost of a lot of these goods, so manufacturers and retailers invented installment plans. Debt ruled as 75% of cars, furniture and washing machines were bought on credit.

So what’s next? My fundamental rule for finding growth trends is that you need to see viable technologies today, and then predict which ones will get cheaper and better over time. Microprocessors, storage, bandwidth—all still going strong after half a century.

The fundamental building block of the 2020s will be artificial intelligence, particularly machine learning. Better to ask what it won’t change this decade. The artificial neural networks that made face and voice recognition viable were the low-hanging fruit. Now chips tailor-made for machine learning are increasing speed and cutting costs. A recent Stanford report suggests that the power behind the most advanced AI computations is doubling nearly every three months, outpacing Moore’s Law by a factor of six.
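
Just to put that in perspective, here is a quick back-of-the-envelope sketch of my own, assuming a doubling period of about 3.4 months for AI compute (consistent with the “nearly every three months” figure above) and a conventional 24 months for Moore’s Law:

    # Back-of-the-envelope comparison of the two doubling rates Kessler cites.
    # The doubling periods are my assumptions, not figures from his article.

    AI_DOUBLING_MONTHS = 3.4      # assumed doubling period for AI compute
    MOORE_DOUBLING_MONTHS = 24.0  # assumed doubling period for Moore's Law

    def growth_factor(months: float, doubling_period: float) -> float:
        """How many times capability has multiplied after `months`."""
        return 2 ** (months / doubling_period)

    for years in (1, 2, 5):
        months = years * 12
        ai = growth_factor(months, AI_DOUBLING_MONTHS)
        moore = growth_factor(months, MOORE_DOUBLING_MONTHS)
        print(f"{years} yr: AI compute x{ai:,.0f} vs Moore's Law x{moore:.1f}")

Under those assumptions, after one year the AI curve has multiplied roughly elevenfold while the Moore’s Law curve has not yet doubled. That widening gap is exactly what Kessler is pointing at.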

Want more? You can read the full article here.

Tech Idols?

Who do we look up to? Movie stars? Maybe. Sports figures? Sure.

But when we think about it, those people seem different, not like us, possessing special skills.

How about technology industry leaders? Aren’t they just average Joes who were tinkering around in their garages and got lucky?

We can identify with them, so we tend to make them our idols.

But that is changing. That’s why I was drawn to a piece, “Twilight of the Tech Idols.” Here is how it begins:

The banking industry, which has consistently been one of the wealthiest industries for the last few centuries, has very few leaders one would call “heroes” or “idols.” Most of them are part of a group of men who fought and finessed their way to the top by being good at corporate politics and managing other bankers.

Silicon Valley, in stark contrast, was built on the myth of the visionary heroic geek. A succession of Tech Heroes — from Steve Jobs at Apple and Bill Gates at Microsoft through Larry Page and Sergey Brin at Google to Mark Zuckerberg at Facebook — embodied the American dream. They were regular guys and middle-class youngsters (several of them from immigrant families), whose new technology changed the world and made them extremely wealthy.

The Tech Heroes also made for fabulous media stories. As their businesses grew, they got breathless press coverage as they promised to “disrupt” one industry or another. It nearly got to the point where if a Google founder sneezed, an article could quickly follow: “Will Google Reinvent the Sneeze?” Critics warned of troubles and monopolies ahead, but their voices were outnumbered and drowned out by the cheerleaders.

Want more? You can read the rest of the piece here.

Rolling the Dice on AI

There are no bombs falling on our cities, but America is at war. And the battlespace is artificial intelligence.

Our peer adversaries get this and are investing hundreds of billions of dollars to dominate the world of AI – and yes – dominate the world.

Sadly, our approach to winning this war is to let someone else – in this case, Silicon Valley – worry about it.

Tim Wu nailed it in his piece, “America’s Risky Approach to AI.” Here’s how he begins:

The brilliant 2014 science fiction novel “The Three-Body Problem,” by the Chinese writer Liu Cixin, depicts the fate of civilizations as almost entirely dependent on winning grand races to scientific milestones. Someone in China’s leadership must have read that book, for Beijing has made winning the race to artificial intelligence a national obsession, devoting billions of dollars to the cause and setting 2030 as the target year for world dominance. Not to be outdone, President Vladimir Putin of Russia recently declared that whoever masters A.I. “will become the ruler of the world.”

To be sure, the bold promises made by A.I.’s true believers can seem excessive; today’s A.I. technologies are useful only in narrow situations. But if there is even a slim chance that the race to build stronger A.I. will determine the future of the world — and that does appear to be at least a possibility — the United States and the rest of the West are taking a surprisingly lackadaisical and alarmingly risky approach to the technology.

The plan seems to be for the American tech industry, which makes most of its money in advertising and selling personal gadgets, to serve as champions of the West. Those businesses, it is hoped, will research, develop and disseminate the most important basic technologies of the future. Companies like Google, Apple and Microsoft are formidable entities, with great talent and resources that approximate those of small countries. But they don’t have the resources of large countries, nor do they have incentives that fully align with the public interest.

To exaggerate slightly: If this were 1957, we might as well be hoping that the commercial airlines would take us to the moon.

If the race for powerful A.I. is indeed a race among civilizations for control of the future, the United States and European nations should be spending at least 50 times the amount they do on public funding of basic A.I. research. Their model should be the research that led to the internet, funded by the Advanced Research Projects Agency, created by the Eisenhower administration and arguably the most successful publicly funded science project in American history.

You can read the full article here.

Tech and Defense

I served in the U.S. military at a time when we were in a technological arms race with the Soviet Union. Back then, the Department of Defense was THE leader of technological development.

That is no longer the case. It is now widely recognized that large technology companies—represented most prominently by the so-called “FAANG Five” (Facebook, Apple, Amazon, Netflix and Alphabet’s Google)—are dominating the development of technology.

I work in a U.S. Navy laboratory where we harness these kinds of technologies to put better tools in the hands of America’s service men and women. We recognize that it is not just hardware – planes, ships, tanks and the like – that will give our warfighters the edge, but also the same kinds of technologies – the software – that FAANG companies and others like them develop.

To understand where we are today and fashion a way ahead, it is worth looking at where we were “back in the day” when the Department of Defense led technology development.

That is why I was drawn to read a review of a recent book: THE CODE: Silicon Valley and the Remaking of America. Here is how the review begins:

By the early 1970s, Don Hoefler, a writer for Electronic News, was spending after-hours at his “field office” — a faux-Western tavern known as Walker’s Wagon Wheel, in Mountain View, Calif. In a town with few nightspots, this was the bar of choice for engineers from the growing number of electronic and semiconductor chip firms clustered nearby.

Hoefler had a knack for slogans, having worked as a corporate publicist. In a piece published in 1971, he christened the region — better known for its prune orchards, bland buildings and cookie-cutter subdivisions — “Silicon Valley.” The name stuck, Hoefler became a legend and the region became a metonym for the entire tech sector. Today its five largest companies have a market valuation greater than the economy of the United Kingdom.

How an otherwise unexceptional swath of suburbia came to rule the world is the central question animating “The Code,” Margaret O’Mara’s accessible yet sophisticated chronicle of Silicon Valley. An academic historian blessed with a journalist’s prose, O’Mara focuses less on the actual technology than on the people and policies that ensured its success.

She digs deep into the region’s past, highlighting the critical role of Stanford University. In the immediate postwar era, Fred Terman, an electrical engineer who became Stanford’s provost, remade the school in his own image. He elevated science and engineering disciplines, enabling the university to capture federal defense dollars that helped to fuel the Cold War.

Want more? You can read the full review here.

Cyber-War

One of the most cutting-edge military technologies is generally called “cyber.” Most people struggle with this concept and with what “cyber-warfare” actually means.

That’s why I was intrigued by a recent review of David Sanger’s book, “THE PERFECT WEAPON: War, Sabotage, and Fear in the Cyber Age.” Here’s how the reviewer begins:

New technologies of destruction have appeared throughout history, from the trireme and gunpowder in past centuries to biological and nuclear weapons in more modern times. Each technology goes through a cycle of development and weaponization, followed only later by the formulation of doctrine and occasionally by efforts to control the weapon’s use. The newest technological means of mayhem are cyber, meaning anything involving the electronic transmission of ones and zeros. The development of cyber capabilities has been rapid and is continuing; doctrine is largely yet to be written; and ideas about control are only beginning to emerge.

David E. Sanger’s “The Perfect Weapon” is an encyclopedic account of policy-relevant happenings in the cyberworld. Sanger, a national security correspondent for The New York Times, stays firmly grounded in real events, including communication systems getting hacked and servers being disabled. He avoids the tendency, all too common in futuristic discussions of cyber issues, to spin out elaborate and scary hypothetical scenarios. The book flows from reporting for The Times by Sanger and his colleagues, who have had access, and volunteer informants, that lesser publications rarely enjoy. The text frequently shifts to the first-person singular, along with excerpts from interviews Sanger has had with officials up to and including the president of the United States.

The principal focus of the book is cyberwarfare — the use of techniques to sabotage the electronic or physical assets of an adversary — but its scope extends as well to other controversies that flow from advances in information technology. Sanger touches on privacy issues related to the collection of signals intelligence — a business that has been around since before Franklin Roosevelt’s secretary of war, Henry Stimson, talked about gentlemen not reading each other’s mail. He also addresses social media and the problems of misuse that have bedeviled Facebook, including usage by foreign governments for political purposes. These other topics are to some extent a digression from the main topic of cyberwarfare. Intelligence collection and electronic sabotage are different phenomena, which in the United States involve very different legal principles and policy procedures. But Sanger takes note of such differences, and the book’s inclusiveness makes it useful as a one-stop reference for citizens who want to think intelligently about all issues of public policy having a cyber dimension.

You can read the full review here.

Trusting AI

Few technologies inspire more controversy than artificial intelligence. Some hail it as a savior, others predict it will spell our doom.

That’s why I was drawn to a recent op-ed, “Build AI We Can Trust.” Here’s how the two writers begin:

Artificial intelligence has a trust problem. We are relying on A.I. more and more, but it hasn’t yet earned our confidence.

Tesla cars driving in Autopilot mode, for example, have a troubling history of crashing into stopped vehicles. Amazon’s facial recognition system works great much of the time, but when asked to compare the faces of all 535 members of Congress with 25,000 public arrest photos, it found 28 matches, when in reality there were none. A computer program designed to vet job applicants for Amazon was discovered to systematically discriminate against women. Every month new weaknesses in A.I. are uncovered.

The problem is not that today’s A.I. needs to get better at what it does. The problem is that today’s A.I. needs to try to do something completely different.

In particular, we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets — often using an approach known as deep learning — and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality.

Today’s A.I. systems know surprisingly little about any of these concepts. Take the idea of time. We recently searched on Google for “Did George Washington own a computer?” — a query whose answer requires relating two basic facts (when Washington lived, when the computer was invented) in a single temporal framework. None of Google’s first 10 search results gave the correct answer. The results didn’t even really address the question. The highest-ranked link was to a news story in The Guardian about a computerized portrait of Martha Washington as she might have looked as a young woman.
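
The authors’ point is that the two facts themselves are easy to state; what today’s systems lack is a way to put them in one temporal frame. As a toy illustration of my own (not the authors’ code), representing the dates explicitly makes the answer a one-line comparison:

    # Toy sketch: encode the two facts and compare them in one temporal frame.
    # The dates are well-known facts; the framing and names are my own illustration.

    WASHINGTON_LIFESPAN = (1732, 1799)   # years George Washington was alive
    FIRST_ELECTRONIC_COMPUTERS = 1945    # roughly when electronic computers appeared

    def could_have_owned(lifespan, invention_year):
        """True only if the invention existed during the person's lifetime."""
        _, death_year = lifespan
        return invention_year <= death_year

    print(could_have_owned(WASHINGTON_LIFESPAN, FIRST_ELECTRONIC_COMPUTERS))  # False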

Check out this link to read more.

Leading Technology

Much ink has been spilled regarding the challenges the United States faces in our military technology race with potential adversaries like China and Russia.

One of the best analysts of this issue is Mackenzie Eaglen. Here is part of what she said in a recent op-ed:

In the global arms race, a moment’s hesitation is enough to lose your lead. The Pentagon pioneered research 15 years ago into hypersonic missiles that can cruise at Mach 5. The U.S. then chose not to develop the technology—but China and Russia developed it. Now Beijing and Moscow have hypersonics at the ready and, according to Pentagon research chief Michael D. Griffin, no number of current U.S. ships or ground-based antimissile systems would be enough to counter a massive attack.

The problem stems in part from the Pentagon’s increasing dependence on outside firms. For decades after World War II, the Defense Department was a producer of cutting-edge research and technology, but today it contracts more and more out to Silicon Valley. No longer setting its own course for development, the Pentagon is unable to take the major leaps that once kept U.S. military technology racing ahead.

The Pentagon still acquires its systems in accordance with decades-old protocols that value compliance over nimbleness and usefulness. It has doubled down on unreasonable demands to own intellectual property in perpetuity, a nonstarter for many software companies with which it contracts. Now defense leaders are stuck having to sort out which software systems might pose a security risk because the developers often also sell to America’s rivals.

This shift from calling the shots to negotiating with ever-more-private interests is new for the defense bureaucracy. For generations, influence flowed in the other direction. The buildup in defense research-and-development spending that began in the late 1940s and continued through the ’80s was responsible for propelling many of the tech breakthroughs of the past century: cellphones, jet engines, integrated circuits, weather satellites and the Global Positioning System. A recent example is Apple’s Siri artificial-intelligence system, which it purchased from the Defense Advanced Research Projects Agency.

You can read the full article here.

Harnessing Technology to Make War Safer

Few subjects inspire more furious debate than the terms “war” and “drones.” There is vastly more heat than light on this subject.

That is why I was impressed by the reasoned arguments of a U.S. Marine who wrote an article entitled, “How Tech Can Make War Safer.”

Amidst all the shrill debate on the subject, Lucas Kunce explained how, when the tech industry refuses to work on defense-related projects, war becomes less safe. Here’s how he begins:

Last year, more than 4,600 Google employees signed a petition urging the company to commit to refusing to build weapons technology. A response to Google’s work with the military on an artificial intelligence-based targeting system, the petition made a powerful and seemingly simple moral statement: “We believe that Google should not be in the business of war.” Similarly, Microsoft employees in February demanded that their company withhold its augmented reality HoloLens headset technology from the Army, saying they did not want to become “war profiteers.”

As a Marine who has been in harm’s way a few times, I am glad that my peers in the tech industry have initiated this discussion. America is long overdue for a conversation about how we engage in war and peace; the difference between the decision to go to war and decisions about what happens on the battlefield during warfare; and what it means to fight, die and kill for our country.

My job has put me in places where I have witnessed and taken part in significant battlefield decisions. From my experience, I have learned that working with the military to develop systems would actually support the tech workers’ goal to reduce harm in warfare. (I need to note here that I am speaking for myself, and my views do not necessarily reflect those of the Department of Defense.)

Tech workers might not realize that their opposition to the work their companies do on military technology does not change the decision-making of the American leaders who choose to go to war, and therefore is unlikely to prevent any harm caused by war. Instead, it has the unintended effect of imperiling not only the lives of service members, but also the lives of innocent civilians whom I believe these workers want to protect.

You can read the full article here.