Dedication to a Cause

Much ink has been spilled about the future of robots and how they will either help – or hurt – humanity. Some still fear HAL from "2001: A Space Odyssey."

That is why I was drawn to a recent piece, “A Case for Cooperation Between Machines and Humans.” The subtitle is revealing: “A computer scientist argues that the quest for fully automated robots is misguided, perhaps even dangerous. His decades of warnings are gaining more attention.” Here is how it begins:

The Tesla chief Elon Musk and other big-name Silicon Valley executives have long promised a car that can do all the driving without human assistance.

But Ben Shneiderman, a University of Maryland computer scientist who has for decades warned against blindly automating tasks with computers, thinks fully automated cars and the tech industry’s vision for a robotic future are misguided. Even dangerous. Robots should collaborate with humans, he believes, rather than replace them.

Late last year, Dr. Shneiderman embarked on a crusade to convince the artificial intelligence world that it is heading in the wrong direction. In February, he confronted organizers of an industry conference on “Assured Autonomy” in Phoenix, telling them that even the title of their conference was wrong. Instead of trying to create autonomous robots, he said, designers should focus on a new mantra, designing computerized machines that are “reliable, safe and trustworthy.”

There should be the equivalent of a flight data recorder for every robot, Dr. Shneiderman argued.

It is a warning that’s likely to gain more urgency when the world’s economies eventually emerge from the devastation of the coronavirus pandemic and millions who have lost their jobs try to return to work. A growing number of them will find they are competing with or working side by side with machines.

Want more? You can read the full article here.

Big Tech and National Security

There is little question that the United States faces two powerful peer competitors, China and Russia.

There is also no question that we cannot match these powers soldier for soldier or tank for tank. The only way we are likely to prevail is through technological innovation.

The big tech companies – not the traditional defense industry giants – are the ones who can help us achieve that goal. One person, former Google CEO Eric Schmidt, is leading this effort.

A revealing article in the New York Times titled “‘I Could Solve Most of Your Problems’: Eric Schmidt’s Pentagon Offensive” begins this way:

In July 2016, Raymond Thomas, a four-star general and head of the U.S. Special Operations Command, hosted a guest: Eric Schmidt, the chairman of Google.

General Thomas, who served in the 1991 gulf war and deployed many times to Afghanistan, spent the better part of a day showing Mr. Schmidt around Special Operations Command’s headquarters in Tampa, Fla. They scrutinized prototypes for a robotic exoskeleton suit and joined operational briefings, which Mr. Schmidt wanted to learn more about because he had recently begun advising the military on technology.

After the visit, as they rode in a Chevy Suburban toward an airport, the conversation turned to a form of artificial intelligence.

“You absolutely suck at machine learning,” Mr. Schmidt told General Thomas, the officer recalled. “If I got under your tent for a day, I could solve most of your problems.” General Thomas said he was so offended that he wanted to throw Mr. Schmidt out of the car, but refrained.

Four years later, Mr. Schmidt, 65, has channeled his blunt assessment of the military’s tech failings into a personal campaign to revamp America’s defense forces with more engineers, more software and more A.I. In the process, the tech billionaire, who left Google last year, has reinvented himself as the prime liaison between Silicon Valley and the national security community.

Follow the link to read the full article.

Big Tech

Many people, perhaps with some justification, feel that big companies dominate the nation (and the world) in unhelpful ways.

That has been an American inclination since the days of the “robber barons” more than a century ago. Our attitudes mirror those of our ancestors.

Today, it is fashionable to bash “big anything.” At or near the top of the list has been “Big Pharma,” at least until we began looking to it for a cure for Covid-19.

Big tech has also taken a beating, with calls to break up tech companies. But big tech does more than manufacture gadgets and software. Here is an article about what Salesforce CEO Marc Benioff did to help. It begins:

Sam Hawgood, the chancellor of the University of California, San Francisco, was getting concerned.

It was March 19, coronavirus cases in California were rising at an alarming rate, and U.C.S.F., one of the Bay Area’s major medical providers, was already running perilously low on personal protective equipment.

The university’s usual suppliers in the United States were short on masks and face shields, and there was no sign that the State of California or the federal government was coming to the rescue. “The supply chain had really dried up,” Mr. Hawgood said.

So Mr. Hawgood called Marc Benioff, the hyperconnected billionaire who is a founder and the chief executive of Salesforce.

In some ways, it was the natural call to make. Mr. Benioff gave the university $100 million to build a children’s hospital in 2010 and remained a major benefactor. But there was no reason to think Mr. Benioff, who runs an enterprise software company, could quickly muster a supply chain for personal protective equipment, especially during a global pandemic.

Nonetheless, that phone call set off a frenzied effort by Mr. Benioff and his team that drew in major companies like FedEx, Walmart, Uber and Alibaba. In a matter of weeks, the team spent more than $25 million to procure more than 50 million pieces of protective equipment. Fifteen million units have already been delivered to hospitals, medical facilities and states, and more are on the way.

Follow the link to read a truly enriching article.

The U.S. and China

Much ink has been spilled regarding the relationship between the U.S. Federal Government and the Technology Industry. Some of it has been shrill.

That is why I gravitated to a recent op-ed written by former Google CEO Eric Schmidt. The title of the piece, “Silicon Valley Needs the Government,” reveals a lot.

Dr. Schmidt has street cred few others possess. He is the chairman of the National Security Commission on Artificial Intelligence and the Defense Innovation Board.

Here is the preamble to his article: “I Used to Run Google. Silicon Valley Could Lose to China. We can’t win the technology wars without the federal government’s help.” Then he continues:

America’s companies and universities innovate like no other places on earth. We are garage start-ups, risk-taking entrepreneurs and intrepid scholars exploring new advances in science and technology. But that is only part of the story.

Many of Silicon Valley’s leaders got their start with grants from the federal government — including me. My graduate work in computer science in the 1970s and ’80s was funded in part by the National Science Foundation and the Defense Advanced Research Projects Agency.

But in recent years, Americans — Silicon Valley leaders included — have put too much faith in the private sector to ensure U.S. global leadership in new technology. Now we are in a technology competition with China that has profound ramifications for our economy and defense — a reality I have come to appreciate as chairman of two government panels on innovation and national security. The government needs to get back in the game in a serious way.

Important trends are not in our favor. America’s lead in artificial intelligence, for example, is precarious. A.I. will open new frontiers in everything from biotechnology to banking, and it is also a Defense Department priority. Leading the world in A.I. is essential to growing our economy and protecting our security. A recent study considering more than 100 metrics finds that the United States is well ahead of China today but will fall behind in five to 10 years. China also has almost twice as many supercomputers and about 15 times as many deployed 5G base stations as the United States. If current trends continue, China’s overall investments in research and development are expected to surpass those of the United States within 10 years, around the same time its economy is projected to become larger than ours.

Unless these trends change, in the 2030s we will be competing with a country that has a bigger economy, more research and development investments, better research, wider deployment of new technologies and stronger computing infrastructure.

You can read the full piece here.

Tech Rising?

Last month, I blogged about technology and featured an article that asked the question, “Has Technology Peaked?”

Piling on to that post, here are the most recent figures (February 11, 2020) from the Wall Street Journal for the companies with valuations over one trillion dollars:

  • Microsoft: $1.44 trillion
  • Apple: $1.41 trillion
  • Amazon: $1.06 trillion
  • Alphabet: $1.04 trillion

Oh, and Facebook is the next-most highly valued company at $607 billion.

Has technology peaked? What do you think?

Has Technology Peaked?

A great deal of ink has been spilled trying to guess where technology is headed. After decades of spectacular advances, many think this progress has peaked.

I don’t think it has, and that view was supported in a recent article by Andy Kessler, who follows technology for the Wall Street Journal. Here is how he begins:

Does history rhyme? A century ago, the ’20s boomed, driven by consumer spending on homes, cars, radios and newfangled appliances like refrigerators, sewing machines and vacuum cleaners. Most Americans couldn’t afford the upfront cost of a lot of these goods, so manufacturers and retailers invented installment plans. Debt ruled as 75% of cars, furniture and washing machines were bought on credit.

So what’s next? My fundamental rule for finding growth trends is that you need to see viable technologies today, and then predict which ones will get cheaper and better over time. Microprocessors, storage, bandwidth—all still going strong after half a century.

The fundamental building block of the 2020s will be artificial intelligence, particularly machine learning. Better to ask what it won’t change this decade. The artificial neural networks that made face and voice recognition viable were the low-hanging fruit. Now chips tailor-made for machine learning are increasing speed and cutting costs. A recent Stanford report suggests that the power behind the most advanced AI computations is doubling nearly every three months, outpacing Moore’s Law by a factor of six.
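
As a quick back-of-the-envelope check on that “factor of six” claim, here is a minimal sketch (my own illustration, not from the article), assuming the conventional ~18-month Moore’s Law doubling period against the ~3-month doubling the Stanford report describes:

```python
# Illustrative comparison of AI-compute growth vs. Moore's Law.
# Assumptions (not from the article): Moore's Law doubles roughly every 18 months;
# the most advanced AI computations double roughly every 3 months.

moore_doubling_months = 18  # assumed classic Moore's Law doubling period
ai_doubling_months = 3      # doubling period cited for advanced AI compute

# How many times more often AI compute doubles -- this is the "factor of six."
rate_ratio = moore_doubling_months / ai_doubling_months
print(f"AI compute doubles about {rate_ratio:.0f}x as often as Moore's Law")

# What each growth rate implies over a single year.
ai_growth = 2 ** (12 / ai_doubling_months)        # ~16x per year
moore_growth = 2 ** (12 / moore_doubling_months)  # ~1.6x per year
print(f"One-year growth: AI ~{ai_growth:.0f}x vs. Moore's Law ~{moore_growth:.1f}x")
```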

Want more? You can read the full article here.

Tech Idols?

Who do we look up to? Movie stars? Maybe. Sports figures? Sure.

But when we think about it, those people seem different, not like us, possessing special skills.

How about technology industry leaders? Aren’t they just average Joes who were tinkering around in their garages and got lucky?

We can identify with them, and so we tend to make them our idols.

But that is changing. That’s why I was drawn to a piece, “Twilight of the Tech Idols.” Here is how it begins:

The banking industry, which has consistently been one of the wealthiest industries for the last few centuries, has very few leaders one would call “heroes” or “idols.” Most of them are part of a group of men who fought and finessed their way to the top by being good at corporate politics and managing other bankers.

Silicon Valley, in stark contrast, was built on the myth of the visionary heroic geek. A succession of Tech Heroes — from Steve Jobs at Apple and Bill Gates at Microsoft through Larry Page and Sergey Brin at Google to Mark Zuckerberg at Facebook — embodied the American dream. They were regular guys and middle-class youngsters (several of them from immigrant families), whose new technology changed the world and made them extremely wealthy.

The Tech Heroes also made for fabulous media stories. As their businesses grew, they got breathless press coverage as they promised to “disrupt” one industry or another. It nearly got to the point where if a Google founder sneezed, an article could quickly follow: “Will Google Reinvent the Sneeze?” Critics warned of troubles and monopolies ahead, but their voices were outnumbered and drowned out by the cheerleaders.

Want more? You can read the rest of the piece here.

Rolling the Dice on AI

There are no bombs falling on our cities, but America is at war. And the battlespace is artificial intelligence.

Our peer adversaries get this and are investing hundreds of billions of dollars to dominate the world of AI – and yes – dominate the world.

Sadly, our approach to winning this war is to let someone else – in this case, Silicon Valley – worry about it.

Tim Wu nailed it in his piece, “America’s Risky Approach to AI.” Here’s how he begins:

The brilliant 2014 science fiction novel “The Three-Body Problem,” by the Chinese writer Liu Cixin, depicts the fate of civilizations as almost entirely dependent on winning grand races to scientific milestones. Someone in China’s leadership must have read that book, for Beijing has made winning the race to artificial intelligence a national obsession, devoting billions of dollars to the cause and setting 2030 as the target year for world dominance. Not to be outdone, President Vladimir Putin of Russia recently declared that whoever masters A.I. “will become the ruler of the world.”

To be sure, the bold promises made by A.I.’s true believers can seem excessive; today’s A.I. technologies are useful only in narrow situations. But if there is even a slim chance that the race to build stronger A.I. will determine the future of the world — and that does appear to be at least a possibility — the United States and the rest of the West are taking a surprisingly lackadaisical and alarmingly risky approach to the technology.

The plan seems to be for the American tech industry, which makes most of its money in advertising and selling personal gadgets, to serve as champions of the West. Those businesses, it is hoped, will research, develop and disseminate the most important basic technologies of the future. Companies like Google, Apple and Microsoft are formidable entities, with great talent and resources that approximate those of small countries. But they don’t have the resources of large countries, nor do they have incentives that fully align with the public interest.

To exaggerate slightly: If this were 1957, we might as well be hoping that the commercial airlines would take us to the moon.

If the race for powerful A.I. is indeed a race among civilizations for control of the future, the United States and European nations should be spending at least 50 times the amount they do on public funding of basic A.I. research. Their model should be the research that led to the internet, funded by the Advanced Research Projects Agency, created by the Eisenhower administration and arguably the most successful publicly funded science project in American history.

You can read the full article here.

Tech and Defense

I served in the U.S. military at a time when we were in a technological arms race with the Soviet Union. Back then, the Department of Defense was THE leader of technological development.

That is no longer the case. It is now widely recognized that large technology companies—represented most prominently by the so-called “FAANG Five” (Facebook, Apple, Amazon, Netflix and Alphabet’s Google)—are dominating the development of technology.

I work in a U.S. Navy laboratory where we strive to harness these kinds of technologies to put better tools in the hands of America’s service men and women. We recognize that it is not just hardware – planes, ships, tanks and the like – that will give our warfighters the edge, but also the kind of software that FAANG companies and others like them develop.

To understand where we are today and fashion a way ahead, it is worth looking at where we were “back in the day” when the Department of Defense led technology development.

That is why I was drawn to read a review of a recent book, “The Code: Silicon Valley and the Remaking of America.” Here is how the review begins:

By the early 1970s, Don Hoefler, a writer for Electronic News, was spending after-hours at his “field office” — a faux-Western tavern known as Walker’s Wagon Wheel, in Mountain View, Calif. In a town with few nightspots, this was the bar of choice for engineers from the growing number of electronic and semiconductor chip firms clustered nearby.

Hoefler had a knack for slogans, having worked as a corporate publicist. In a piece published in 1971, he christened the region — better known for its prune orchards, bland buildings and cookie-cutter subdivisions — “Silicon Valley.” The name stuck, Hoefler became a legend and the region became a metonym for the entire tech sector. Today its five largest companies have a market valuation greater than the economy of the United Kingdom.

How an otherwise unexceptional swath of suburbia came to rule the world is the central question animating “The Code,” Margaret O’Mara’s accessible yet sophisticated chronicle of Silicon Valley. An academic historian blessed with a journalist’s prose, O’Mara focuses less on the actual technology than on the people and policies that ensured its success.

She digs deep into the region’s past, highlighting the critical role of Stanford University. In the immediate postwar era, Fred Terman, an electrical engineer who became Stanford’s provost, remade the school in his own image. He elevated science and engineering disciplines, enabling the university to capture federal defense dollars that helped to fuel the Cold War.

Want more? You can read the full review here.

Cyber-War

One of the most cutting-edge military technologies is generally called “cyber.” Most people struggle with this concept and with what “cyber-warfare” actually means.

That’s why I was intrigued by a recent review of David Sanger’s book, “The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age.” Here’s how the reviewer begins:

New technologies of destruction have appeared throughout history, from the trireme and gunpowder in past centuries to biological and nuclear weapons in more modern times. Each technology goes through a cycle of development and weaponization, followed only later by the formulation of doctrine and occasionally by efforts to control the weapon’s use. The newest technological means of mayhem are cyber, meaning anything involving the electronic transmission of ones and zeros. The development of cyber capabilities has been rapid and is continuing; doctrine is largely yet to be written; and ideas about control are only beginning to emerge.

David E. Sanger’s “The Perfect Weapon” is an encyclopedic account of policy-relevant happenings in the cyberworld. Sanger, a national security correspondent for The New York Times, stays firmly grounded in real events, including communication systems getting hacked and servers being disabled. He avoids the tendency, all too common in futuristic discussions of cyber issues, to spin out elaborate and scary hypothetical scenarios. The book flows from reporting for The Times by Sanger and his colleagues, who have had access, and volunteer informants, that lesser publications rarely enjoy. The text frequently shifts to the first-person singular, along with excerpts from interviews Sanger has had with officials up to and including the president of the United States.

The principal focus of the book is cyberwarfare — the use of techniques to sabotage the electronic or physical assets of an adversary — but its scope extends as well to other controversies that flow from advances in information technology. Sanger touches on privacy issues related to the collection of signals intelligence — a business that has been around since before Franklin Roosevelt’s secretary of war, Henry Stimson, talked about gentlemen not reading each other’s mail. He also addresses social media and the problems of misuse that have bedeviled Facebook, including usage by foreign governments for political purposes. These other topics are to some extent a digression from the main topic of cyberwarfare. Intelligence collection and electronic sabotage are different phenomena, which in the United States involve very different legal principles and policy procedures. But Sanger takes note of such differences, and the book’s inclusiveness makes it useful as a one-stop reference for citizens who want to think intelligently about all issues of public policy having a cyber dimension.

You can read the full review here.