Has Technology Peaked?


A great deal of ink has been spilled trying to guess where technology is headed. After decades of spectacular advances, many think those advances have peaked.

I don’t think they have, and that view was reinforced by a recent article from Andy Kessler, who follows technology for the Wall Street Journal. Here is how he begins:

Does history rhyme? A century ago, the ’20s boomed, driven by consumer spending on homes, cars, radios and newfangled appliances like refrigerators, sewing machines and vacuum cleaners. Most Americans couldn’t afford the upfront cost of a lot of these goods, so manufacturers and retailers invented installment plans. Debt ruled as 75% of cars, furniture and washing machines were bought on credit.

So what’s next? My fundamental rule for finding growth trends is that you need to see viable technologies today, and then predict which ones will get cheaper and better over time. Microprocessors, storage, bandwidth—all still going strong after half a century.

The fundamental building block of the 2020s will be artificial intelligence, particularly machine learning. Better to ask what it won’t change this decade. The artificial neural networks that made face and voice recognition viable were the low-hanging fruit. Now chips tailor-made for machine learning are increasing speed and cutting costs. A recent Stanford report suggests that the power behind the most advanced AI computations is doubling nearly every three months, outpacing Moore’s Law by a factor of six.
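The “factor of six” in that excerpt is just the ratio of the two doubling periods. Here is a minimal sketch of the arithmetic, assuming Moore’s Law as a doubling roughly every 18 months (the commonly cited range is 18 to 24):

```python
def growth_factor(years, doubling_period_months):
    """Multiplicative growth after `years`, given a doubling period in months."""
    return 2 ** (years * 12 / doubling_period_months)

# AI compute, doubling every 3 months: 2**4 = 16x in a single year.
ai_one_year = growth_factor(1, 3)

# Moore's Law, assumed here as doubling every 18 months: about 1.6x in a year.
moore_one_year = growth_factor(1, 18)

# The ratio of doubling rates, 18 / 3 = 6, is the "factor of six" in the excerpt.
rate_ratio = 18 / 3
```

The exponential framing matters: a six-fold faster doubling rate does not mean six-fold more compute, it means the gap itself compounds every quarter.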

Want more? You can read the full article here.

Tech Idols?


Who do we look up to? Movie stars? Maybe. Sports figures? Sure.

But when we think about it, those people seem different, not like us, possessing special skills.

How about technology industry leaders? Aren’t they just average Joes who were tinkering around in their garages and got lucky?

We can identify with them, so we tend to make them our idols.

But that is changing. That’s why I was drawn to a piece, “Twilight of the Tech Idols.” Here is how it begins:

The banking industry, which has consistently been one of the wealthiest industries for the last few centuries, has very few leaders one would call “heroes” or “idols.” Most of them are part of a group of men who fought and finessed their way to the top by being good at corporate politics and managing other bankers.

Silicon Valley, in stark contrast, was built on the myth of the visionary heroic geek. A succession of Tech Heroes — from Steve Jobs at Apple and Bill Gates at Microsoft through Larry Page and Sergey Brin at Google to Mark Zuckerberg at Facebook — embodied the American dream. They were regular guys and middle-class youngsters (several of them from immigrant families), whose new technology changed the world and made them extremely wealthy.

The Tech Heroes also made for fabulous media stories. As their businesses grew, they got breathless press coverage as they promised to “disrupt” one industry or another. It nearly got to the point where if a Google founder sneezed, an article could quickly follow: “Will Google Reinvent the Sneeze?” Critics warned of troubles and monopolies ahead, but their voices were outnumbered and drowned out by the cheerleaders.

Want more? You can read the rest of the piece here.

Rolling the Dice on AI


There are no bombs falling on our cities, but America is at war. And the battlespace is artificial intelligence.

Our peer adversaries get this and are investing hundreds of billions of dollars to dominate the world of AI – and yes – dominate the world.

Sadly, our approach to winning this war is to let someone else – in this case, Silicon Valley – worry about it.

Tim Wu nailed it in his piece, “America’s Risky Approach to AI.” Here’s how he begins:

The brilliant 2014 science fiction novel “The Three-Body Problem,” by the Chinese writer Liu Cixin, depicts the fate of civilizations as almost entirely dependent on winning grand races to scientific milestones. Someone in China’s leadership must have read that book, for Beijing has made winning the race to artificial intelligence a national obsession, devoting billions of dollars to the cause and setting 2030 as the target year for world dominance. Not to be outdone, President Vladimir Putin of Russia recently declared that whoever masters A.I. “will become the ruler of the world.”

To be sure, the bold promises made by A.I.’s true believers can seem excessive; today’s A.I. technologies are useful only in narrow situations. But if there is even a slim chance that the race to build stronger A.I. will determine the future of the world — and that does appear to be at least a possibility — the United States and the rest of the West are taking a surprisingly lackadaisical and alarmingly risky approach to the technology.

The plan seems to be for the American tech industry, which makes most of its money in advertising and selling personal gadgets, to serve as champions of the West. Those businesses, it is hoped, will research, develop and disseminate the most important basic technologies of the future. Companies like Google, Apple and Microsoft are formidable entities, with great talent and resources that approximate those of small countries. But they don’t have the resources of large countries, nor do they have incentives that fully align with the public interest.

To exaggerate slightly: If this were 1957, we might as well be hoping that the commercial airlines would take us to the moon.

If the race for powerful A.I. is indeed a race among civilizations for control of the future, the United States and European nations should be spending at least 50 times the amount they do on public funding of basic A.I. research. Their model should be the research that led to the internet, funded by the Advanced Research Projects Agency, created by the Eisenhower administration and arguably the most successful publicly funded science project in American history.

You can read the full article here.

Tech and Defense


I served in the U.S. military at a time when we were in a technological arms race with the Soviet Union. Back then, the Department of Defense was THE leader of technological development.

That is no longer the case. It is now widely recognized that large technology companies—represented most prominently by the so-called “FAANG Five” (Facebook, Apple, Amazon, Netflix and Alphabet’s Google)—are dominating the development of technology.

I work in a U.S. Navy laboratory where we work to harness these kinds of technologies to put better tools in the hands of America’s service men and women. We recognize that it is not just hardware – planes, ships, tanks and the like – that will give our warfighters the edge, but also the same kinds of technologies – the software – that FAANG companies and others like them develop.

To understand where we are today and fashion a way ahead, it is worth looking at where we were “back in the day” when the Department of Defense led technology development.

That is why I was drawn to read a review of a recent book, “The Code: Silicon Valley and the Remaking of America.” Here is how the review begins:

By the early 1970s, Don Hoefler, a writer for Electronic News, was spending after-hours at his “field office” — a faux-Western tavern known as Walker’s Wagon Wheel, in Mountain View, Calif. In a town with few nightspots, this was the bar of choice for engineers from the growing number of electronic and semiconductor chip firms clustered nearby.

Hoefler had a knack for slogans, having worked as a corporate publicist. In a piece published in 1971, he christened the region — better known for its prune orchards, bland buildings and cookie-cutter subdivisions — “Silicon Valley.” The name stuck, Hoefler became a legend and the region became a metonym for the entire tech sector. Today its five largest companies have a market valuation greater than the economy of the United Kingdom.

How an otherwise unexceptional swath of suburbia came to rule the world is the central question animating “The Code,” Margaret O’Mara’s accessible yet sophisticated chronicle of Silicon Valley. An academic historian blessed with a journalist’s prose, O’Mara focuses less on the actual technology than on the people and policies that ensured its success.

She digs deep into the region’s past, highlighting the critical role of Stanford University. In the immediate postwar era, Fred Terman, an electrical engineer who became Stanford’s provost, remade the school in his own image. He elevated science and engineering disciplines, enabling the university to capture federal defense dollars that helped to fuel the Cold War.

Want more? You can read the full review here.

Cyber-War


One of the most cutting-edge military technologies is generally called “cyber.” Most people struggle with this concept and with what “cyber-warfare” actually means.

That’s why I was intrigued by a recent book review of David Sanger’s book: “THE PERFECT WEAPON: War, Sabotage, and Fear in the Cyber Age.” Here’s how the reviewer begins:

New technologies of destruction have appeared throughout history, from the trireme and gunpowder in past centuries to biological and nuclear weapons in more modern times. Each technology goes through a cycle of development and weaponization, followed only later by the formulation of doctrine and occasionally by efforts to control the weapon’s use. The newest technological means of mayhem are cyber, meaning anything involving the electronic transmission of ones and zeros. The development of cyber capabilities has been rapid and is continuing; doctrine is largely yet to be written; and ideas about control are only beginning to emerge.

David E. Sanger’s “The Perfect Weapon” is an encyclopedic account of policy-relevant happenings in the cyberworld. Sanger, a national security correspondent for The New York Times, stays firmly grounded in real events, including communication systems getting hacked and servers being disabled. He avoids the tendency, all too common in futuristic discussions of cyber issues, to spin out elaborate and scary hypothetical scenarios. The book flows from reporting for The Times by Sanger and his colleagues, who have had access, and volunteer informants, that lesser publications rarely enjoy. The text frequently shifts to the first-person singular, along with excerpts from interviews Sanger has had with officials up to and including the president of the United States.

The principal focus of the book is cyberwarfare — the use of techniques to sabotage the electronic or physical assets of an adversary — but its scope extends as well to other controversies that flow from advances in information technology. Sanger touches on privacy issues related to the collection of signals intelligence — a business that has been around since before Franklin Roosevelt’s secretary of war, Henry Stimson, talked about gentlemen not reading each other’s mail. He also addresses social media and the problems of misuse that have bedeviled Facebook, including usage by foreign governments for political purposes. These other topics are to some extent a digression from the main topic of cyberwarfare. Intelligence collection and electronic sabotage are different phenomena, which in the United States involve very different legal principles and policy procedures. But Sanger takes note of such differences, and the book’s inclusiveness makes it useful as a one-stop reference for citizens who want to think intelligently about all issues of public policy having a cyber dimension.

You can read the full review here.

Trusting AI


Few technologies inspire more controversy than artificial intelligence. Some hail it as a savior, others predict it will spell our doom.

That’s why I was drawn to a recent op-ed, “Build AI we can trust.” Here’s how the two writers begin:

Artificial intelligence has a trust problem. We are relying on A.I. more and more, but it hasn’t yet earned our confidence.

Tesla cars driving in Autopilot mode, for example, have a troubling history of crashing into stopped vehicles. Amazon’s facial recognition system works great much of the time, but when asked to compare the faces of all 535 members of Congress with 25,000 public arrest photos, it found 28 matches, when in reality there were none. A computer program designed to vet job applicants for Amazon was discovered to systematically discriminate against women. Every month new weaknesses in A.I. are uncovered.

The problem is not that today’s A.I. needs to get better at what it does. The problem is that today’s A.I. needs to try to do something completely different.

In particular, we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets — often using an approach known as deep learning — and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality.

Today’s A.I. systems know surprisingly little about any of these concepts. Take the idea of time. We recently searched on Google for “Did George Washington own a computer?” — a query whose answer requires relating two basic facts (when Washington lived, when the computer was invented) in a single temporal framework. None of Google’s first 10 search results gave the correct answer. The results didn’t even really address the question. The highest-ranked link was to a news story in The Guardian about a computerized portrait of Martha Washington as she might have looked as a young woman.

Check out this link to read more.

Leading Technology


Much ink has been spilled regarding the challenges the United States faces in our military technology race with potential adversaries like China and Russia.

One of the best analysts on this issue is Mackenzie Eaglen. Here is part of what she said in a recent op-ed:

In the global arms race, a moment’s hesitation is enough to lose your lead. The Pentagon pioneered research 15 years ago into hypersonic missiles that can cruise at Mach 5. The U.S. then chose not to develop the technology—but China and Russia developed it. Now Beijing and Moscow have hypersonics at the ready and, according to Pentagon research chief Michael D. Griffin, no number of current U.S. ships or ground-based antimissile systems would be enough to counter a massive attack.

The problem stems in part from the Pentagon’s increasing dependence on outside firms. For decades after World War II, the Defense Department was a producer of cutting-edge research and technology, but today it contracts more and more out to Silicon Valley. No longer setting its own course for development, the Pentagon is unable to take the major leaps that once kept U.S. military technology racing ahead.

The Pentagon still acquires its systems in accordance with decades-old protocols that value compliance over nimbleness and usefulness. It has doubled down on unreasonable demands to own intellectual property in perpetuity, a nonstarter for many software companies with which it contracts. Now defense leaders are stuck having to sort out which software systems might pose a security risk because the developers often also sell to America’s rivals.

This shift from calling the shots to negotiating with ever-more-private interests is new for the defense bureaucracy. For generations, influence flowed in the other direction. The buildup in defense research-and-development spending that began in the late 1940s and continued through the ’80s was responsible for propelling many of the tech breakthroughs of the past century: cellphones, jet engines, integrated circuits, weather satellites and the Global Positioning System. A recent example is Apple’s Siri artificial-intelligence system, which it purchased from the Defense Advanced Research Projects Agency.

You can read the full article here.

Harnessing Technology to Make War Safer


Few subjects inspire more furious debate than the terms “war” and “drones.” There is vastly more heat than light on this subject.

That is why I was impressed by the reasoned arguments of a U.S. Marine who wrote an article entitled, “How Tech Can Make War Safer.”

Amidst all the shrill debate on the subject, Lucas Kunce explained how, when the tech industry refuses to work on defense-related projects, war becomes less safe. Here’s how he begins:

Last year, more than 4,600 Google employees signed a petition urging the company to commit to refusing to build weapons technology. A response to Google’s work with the military on an artificial intelligence-based targeting system, the petition made a powerful and seemingly simple moral statement: “We believe that Google should not be in the business of war.” Similarly, Microsoft employees in February demanded that their company withhold its augmented reality HoloLens headset technology from the Army, saying they did not want to become “war profiteers.”

As a Marine who has been in harm’s way a few times, I am glad that my peers in the tech industry have initiated this discussion. America is long overdue for a conversation about how we engage in war and peace; the difference between the decision to go to war and decisions about what happens on the battlefield during warfare; and what it means to fight, die and kill for our country.

My job has put me in places where I have witnessed and taken part in significant battlefield decisions. From my experience, I have learned that working with the military to develop systems would actually support the tech workers’ goal to reduce harm in warfare. (I need to note here that I am speaking for myself, and my views do not necessarily reflect those of the Department of Defense.)

Tech workers might not realize that their opposition to the work their companies do on military technology does not change the decision-making of the American leaders who choose to go to war, and therefore is unlikely to prevent any harm caused by war. Instead, it has the unintended effect of imperiling not only the lives of service members, but also the lives of innocent civilians whom I believe these workers want to protect.

You can read the full article here.

Our Phones – Ourselves


Are you reading this on your phone? It’s likely that you are, and that your smartphone is such a constant companion that it is on your person 24/7.

Was this the plan when Steve Jobs first introduced this magical device? Not at all, suggests Cal Newport. Here is how he began his insightful piece:

Smartphones are our constant companions. For many of us, their glowing screens are a ubiquitous presence, drawing us in with endless diversions, like the warm ping of social approval delivered in the forms of likes and retweets, and the algorithmically amplified outrage of the latest “breaking” news or controversy. They’re in our hands, as soon as we wake, and command our attention until the final moments before we fall asleep.

Steve Jobs would not approve.

In 2007, Mr. Jobs took the stage at the Moscone Convention Center in San Francisco and introduced the world to the iPhone. If you watch the full speech, you’ll be surprised by how he imagined our relationship with this iconic invention, because this vision is so different from the way most of us use these devices now.

In the remarks, after discussing the phone’s interface and hardware, he spends an extended amount of time demonstrating how the device leverages the touch screen before detailing the many ways Apple engineers improved the age-old process of making phone calls. “It’s the best iPod we’ve ever made,” Mr. Jobs exclaims at one point. “The killer app is making calls,” he later adds. Both lines spark thunderous applause. He doesn’t dedicate any significant time to discussing the phone’s internet connectivity features until more than 30 minutes into the address.

The presentation confirms that Mr. Jobs envisioned a simpler and more constrained iPhone experience than the one we actually have over a decade later. For example, he doesn’t focus much on apps. When the iPhone was first introduced there was no App Store, and this was by design. As Andy Grignon, an original member of the iPhone team, told me when I was researching this topic, Mr. Jobs didn’t trust third-party developers to offer the same level of aesthetically pleasing and stable experiences that Apple programmers could produce. He was convinced that the phone’s carefully designed native features were enough. It was “an iPod that made phone calls,” Mr. Grignon said to me.

Mr. Jobs seemed to understand the iPhone as something that would help us with a small number of activities — listening to music, placing calls, generating directions. He didn’t seek to radically change the rhythm of users’ daily lives. He simply wanted to take experiences we already found important and make them better.

The minimalist vision for the iPhone he offered in 2007 is unrecognizable today — and that’s a shame.

Under what I call the “constant companion model,” we now see our smartphones as always-on portals to information. Instead of improving activities that we found important before this technology existed, this model changes what we pay attention to in the first place — often in ways designed to benefit the stock price of attention-economy conglomerates, not our satisfaction and well-being.

Want more? You can read the full article here.

Hold the Obits!

Opinion: “The Comeback of the Century,” The New York Times

Over the past decade, countless obituaries have been written for books. Many were convinced the printed book would soon suffer the same fate as the dinosaurs.

Hold the obits!

The printed book is making a huge comeback. I had been aware of this in bits and pieces, but am grateful to Timothy Egan for providing some perspective. Here’s how he began a recent op-ed:

Not long ago I found myself inside the hushed and high-vaulted interior of a nursing home for geriatric books, in the forgotten city of St.-Omer, France. Running my white-gloved hands over the pages of a thousand-year-old manuscript, I was amazed at the still-bright colors applied long ago in a chilly medieval scriptorium. Would anything written today still be around to touch in another millennium?

In the digital age, the printed book has experienced more than its share of obituaries. Among the most dismissive was one from Steve Jobs, who said in 2008, “It doesn’t matter how good or bad the product is, the fact is that people don’t read anymore.”

True, nearly one in four adults in this country has not read a book in the last year. But the book — with a spine, a unique scent, crisp pages and a typeface that may date to Shakespeare’s day — is back. Defying all death notices, sales of printed books continue to rise to new highs, as do the number of independent stores stocked with these voices between covers, even as sales of electronic versions are declining.

Nearly three times as many Americans read a book of history in 2017 as watched the first episode of the final season of “Game of Thrones.” The share of young adults who read poetry in that year more than doubled from five years earlier. A typical rage tweet by President Trump, misspelled and grammatically sad, may get him 100,000 “likes.” Compare that with the 28 million Americans who read a book of verse in the first year of Trump’s presidency, the highest share of the population in 15 years.

So, even with a president who is ahistoric, borderline literate and would fail a sixth-grade reading comprehension test, something wonderful and unexpected is happening in the language arts. When the dominant culture goes low, the saviors of our senses go high.

Want more? You can read it here.