We Have An App

Few writers have the knack for taking difficult subjects – especially technology – and making them understandable to the lay person. Tom Friedman is one of those few.

I read his book, “Thank You for Being Late,” some time ago and found it interesting and enlightening. However, I never really felt able to capture succinctly just what the book was about. Then I came across an old review of it in the Wall Street Journal. Here’s how it began:

Change is nothing new. Nobel laureate Bob Dylan sang that the times they were a-changin’ back in 1964. What has changed is the pace of change: “The three largest forces on the planet—technology, globalization, and climate change—are all accelerating at once,” notes New York Times columnist Thomas L. Friedman in “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations.” Gradual change allows for adaptation; one generation figures out trains, another airplanes. Now, in a world where taxi-cab regulators will figure out Uber just in time for self-driving cars to render such services obsolete, “so many aspects of our societies, workplaces, and geopolitics are being reshaped and need to be reimagined.” All of it creates a sense of discomfort and provokes backlash—witness Brexit and the American presidential election. Yet there is cause for optimism, Mr. Friedman believes. Humans are crafty creatures.

In this book, Mr. Friedman tries to press pause. The title comes from the author’s exclamation to a tardy breakfast companion: The unexpected downtime had given him an opportunity to reflect. If we all take such time to think, he claims, we can figure out how to “dance in a hurricane.” It’s a comforting idea, though one wonders why, if Mr. Friedman was so happy for this pre-breakfast downtime, he was busily scheduling daily breakfast meetings in the first place. Likewise, this ambitious book, while compelling in places, skips about a lot. His attempt to cover much of the history of modern technology, for instance, quickly descends into gee-whiz moments and ubiquitous exclamation points. Big-belly garbage cans have sensors that wirelessly announce when they need to be emptied, and so Mr. Friedman marvels that “yes, even the garbageman is a tech worker now. . . . That garbage can could take an SAT exam!”

Want to read more?

Tech and the Military

What fuels the U.S. military today isn’t hardware, but software. And it’s not just the kind of software you use on your home computer or in your video games.

Today’s military arms race involves artificial intelligence and machine learning. And the U.S. companies leading that effort are the big tech companies: Alphabet (Google’s parent), Facebook and others.

The U.S. military has gone to these companies for one reason – so our warfighters have an edge against an adversary.

It was almost inevitable that challenges would arise from this uneasy marriage – and now they have.

Here is how a recent article, “A Google Military Project Fuels Internal Dissent,” begins, and this may just be the tip of the iceberg:

Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company’s involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes.

The letter, which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash between Silicon Valley and the federal government that is likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes.

“We believe that Google should not be in the business of war,” says the letter, addressed to Sundar Pichai, the company’s chief executive. It asks that Google pull out of Project Maven, a Pentagon pilot program, and announce a policy that it will not “ever build warfare technology.”

You can read the full article here.

Too Big?

Over the past several weeks, Facebook has dominated the news, with its CEO testifying on Capitol Hill in front of angry lawmakers.

But another tech firm is under the same – or even greater – scrutiny, much as large mega-companies have been for most of our country’s history.

Critics say Google, the search giant, is squelching competition before it begins. Should the government step in? Charles Duhigg sheds some light. Here is part of what he says:

Google has succeeded where Genghis Khan, communism and Esperanto all failed: It dominates the globe. Though estimates vary by region, the company now accounts for an estimated 87 percent of online searches worldwide. It processes trillions of queries each year, which works out to at least 5.5 billion a day, 63,000 a second. So odds are good that sometime in the last week, or last hour, or last 10 minutes, you’ve used Google to answer a nagging question or to look up a minor fact, and barely paused to consider how near-magical it is that almost any bit of knowledge can be delivered to you faster than you can type the request. If you’re old enough to remember the internet before 1998, when Google was founded, you’ll recall what it was like when searching online involved AltaVista or Lycos and consistently delivered a healthy dose of spam or porn. (Pity the early web enthusiasts who innocently asked Jeeves about “amateurs” or “steel.”)

In other words, it’s very likely you love Google, or are at least fond of Google, or hardly think about Google, the same way you hardly think about water systems or traffic lights or any of the other things you rely on every day. Therefore you might have been surprised when headlines began appearing last year suggesting that Google and its fellow tech giants were threatening everything from our economy to democracy itself. Lawmakers have accused Google of creating an automated advertising system so vast and subtle that hardly anyone noticed when Russian saboteurs co-opted it in the last election. Critics say Facebook exploits our addictive impulses and silos us in ideological echo chambers. Amazon’s reach is blamed for spurring a retail meltdown; Apple’s economic impact is so profound it can cause market-wide gyrations. These controversies point to the growing anxiety that a small number of technology companies are now such powerful entities that they can destroy entire industries or social norms with just a few lines of computer code. Those four companies, plus Microsoft, make up America’s largest sources of aggregated news, advertising, online shopping, digital entertainment and the tools of business and communication. They’re also among the world’s most valuable firms, with combined annual revenues of more than half a trillion dollars.
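
As a quick sanity check on the numbers in the first excerpted paragraph, the per-day and per-second figures follow from simple division. Below is a minimal back-of-envelope sketch in Python; the two-trillion-queries-per-year starting point is an assumption for illustration, since the excerpt says only “trillions of queries each year.”

# Back-of-envelope check of the query-rate figures quoted above.
# Assumption for illustration: roughly 2 trillion searches per year
# (the excerpt says only "trillions of queries each year").
queries_per_year = 2_000_000_000_000

queries_per_day = queries_per_year / 365                 # ~5.5 billion per day
queries_per_second = queries_per_day / (24 * 60 * 60)    # ~63,000 per second

print(f"Per day:    {queries_per_day:,.0f}")
print(f"Per second: {queries_per_second:,.0f}")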

Want more? You can read the full piece here.

2001 at 50

Any votes for the most prescient film of the last century? One that looked ahead to a future that most could only dimly perceive.

My vote is for Stanley Kubrick’s “2001: A Space Odyssey.” Forward-looking only begins to describe this work. Here is how Michael Benson begins his piece in the Wall Street Journal:

Fifty years ago, invitation-only audiences gathered in specially equipped Cinerama theaters in Washington, New York and Los Angeles to preview a widescreen epic that director Stanley Kubrick had been working on for four years. Conceived in collaboration with the science-fiction writer Arthur C. Clarke, “2001: A Space Odyssey” was way over budget, and Hollywood rumor held that MGM had essentially bet the studio on the project.

The film’s previews were an unmitigated disaster. Its story line encompassed an exceptional temporal sweep, starting with the initial contact between pre-human ape-men and an omnipotent alien civilization and then vaulting forward to later encounters between Homo sapiens and the elusive aliens, represented throughout by the film’s iconic metallic-black monolith. Although featuring visual effects of unprecedented realism and power, Kubrick’s panoramic journey into space and time made few concessions to viewer understanding. The film was essentially a nonverbal experience. Its first words came only a good half-hour in.

Audience walkouts numbered well over 200 at the New York premiere on April 3, 1968, and the next day’s reviews were almost uniformly negative. Writing in the Village Voice, Andrew Sarris called the movie “a thoroughly uninteresting failure and the most damning demonstration yet of Stanley Kubrick’s inability to tell a story coherently and with a consistent point of view.” And yet that afternoon, a long line—comprised predominantly of younger people—extended down Broadway, awaiting the first matinee.

Stung by the initial reactions and under great pressure from MGM, Kubrick soon cut almost 20 minutes from the film. Although “2001” remained willfully opaque and open to interpretation, the trims removed redundancies, and the film spoke more clearly. Critics began to come around. In her review for the Boston Globe, Marjorie Adams, who had seen the shortened version, called it “the world’s most extraordinary film. Nothing like it has ever been shown in Boston before, or for that matter, anywhere. The film is as exciting as the discovery of a new dimension in life.”

Fifty years later, “2001: A Space Odyssey” is widely recognized as ranking among the most influential movies ever made. The most respected poll of such things, conducted every decade by the British Film Institute’s Sight & Sound magazine, asks the world’s leading directors and critics to name the 100 greatest films of all time. The last BFI decadal survey, conducted in 2012, placed it at No. 2 among directors and No. 6 among critics. Not bad for a film that critic Pauline Kael had waited a contemptuous 10 months before dismissing as “trash masquerading as art” in the pages of Harper’s.

Want to read more?

Turning up the Gain on AI

The United States is at war with China. No, it’s not the trade war. It is the war to dominate artificial intelligence, or AI.

Earlier this month, in my blog post “AI on the March,” I described the enormous strides China is making in AI. Its progress – and its plans for future development of AI – are ambitious and sobering.

The United States isn’t standing still. The Center for a New American Security (CNAS) recently announced the launch of its Task Force on Artificial Intelligence and National Security which will examine how the United States should respond to the national security challenges posed by artificial intelligence. The task force will be chaired by former Deputy Secretary of Defense Robert O. Work, and Dr. Andrew Moore, Dean of the School of Computer Science at Carnegie Mellon University.

“The task force will draw together private industry leaders, former senior government officials, and academic experts to take on the challenges of the AI revolution,” said CNAS Senior Fellow Paul Scharre, who will serve as executive director of the AI Task Force. “I am thrilled to have such an impressive roster of national security leaders and artificial intelligence experts join us in this endeavor.”

“We find ourselves on the leading edge of new industrial and military revolutions, powered by AI; machine learning; and autonomous, unmanned systems and robots,” said Secretary Work. “The United States must consider and prepare for the associated national security challenges – whether in cyber-security, surveillance, disinformation, or defense. CNAS’ AI Task Force will help frame the policy issues surrounding these unique challenges.”

Task force Co-Chair Dr. Andrew Moore said that a key tenet of this signature initiative rests in the importance of human judgment. “Central to all of this is ensuring that such systems work with humans in a way which empowers the human, not replaces the human, and which keeps ultimate decision authority with the human. That is why I am so excited by the mission of the task force.”

AI on the March

Few would dispute the benefits that AI and machine learning can deliver. AI surrounds us in all we do and touches more and more of our daily lives.

American companies like Amazon and Google have done more than anyone to turn A.I. concepts into real products. But for a number of reasons, much of the critical research being done on artificial intelligence is already migrating to other countries, with China poised to take over that leadership role. In July, China unveiled a plan to become the world leader in artificial intelligence and create an industry worth $150 billion to its economy by 2030.

To technologists working on A.I. in the United States, the statement, which was 28 pages long in its English translation, was a direct challenge to America’s lead in arguably the most important tech research to come along in decades. It outlined the Chinese government’s aggressive plan to treat A.I. like the country’s own version of the Apollo 11 lunar mission — an all-in effort that could stoke national pride and spark agenda-setting technology breakthroughs.

The manifesto was also remarkably similar to several reports on the future of artificial intelligence released by the Obama administration at the end of 2016.

“It is remarkable to see how A.I. has emerged as a top priority for the Chinese leadership and how quickly things have been set into motion,” said Elsa Kania, an adjunct fellow at the Center for a New American Security who helped translate the manifesto and follows China’s work on artificial intelligence. “The U.S. plans and policies released in 2016 were seemingly the impetus for the formulation of China’s national A.I. strategy.”

Want more? You can read the full article here.

Our New Rulers

Much ink has been spilled about the enormous – most would say outsize – impact that the biggest technology companies have on our lives.

Much of this commentary has been shrill, so when a thoughtful article on the subject appears, it’s worth highlighting.

Farhad Manjoo nailed it in his piece, “The Frightful Five Want to Rule Entertainment. They Are Hitting Limits.” Here is how he begins:

The tech giants are too big. Other than Donald J. Trump, that’s the defining story of 2017, the meta-narrative lurking beneath every other headline.

The companies I call the Frightful Five — Amazon, Apple, Facebook, Microsoft and Alphabet, Google’s parent company — have experienced astounding growth over the last few years, making them the world’s five most valuable public companies. Because they own the technology that will dominate much of life for the foreseeable future, they are also gaining vast social and political power over much of the world beyond tech.

Now that world is scrambling to figure out what to do about them. And it is discovering that the changes they are unleashing — in the economy, in civic and political life, in arts and entertainment, and in our tech-addled psyches — are not simple to comprehend, let alone to limit.

I’ve spent the last few years studying the rise of these giants. As tensions over their power reached a high boil this summer — Facebook and Russia, Google and sexism, Amazon and Whole Foods — I began thinking more about the nature and consequence of their power, and talking to everyone I could find about these companies. Among them were people in the tech industry, as well as many in other power centers: Washington, Hollywood, the media, the health care and automotive businesses, and other corners of society that may soon be ensnared by one or more of the Five.

Want to read more?

Digital World

Those of us “of a certain age” recall analog. Everything was analog. Then along came digital, and analog and digital coexisted more or less peacefully. Then digital took over.

And digital brought previously unimaginable benefits and showered us with products we didn’t even know we needed. And with it came help lines so experts could help us use our devices.

That’s why I found David Sax’s piece, “Our Love Affair With Digital Is Over,” so on point and trendsetting, heralding a move back toward analog. Here is part of what he shared:

A decade ago I bought my first smartphone, a clunky little BlackBerry 8830 that came in a sleek black leather sheath. I loved that phone. I loved the way it effortlessly slid in and out of its case, loved the soft purr it emitted when an email came in, loved the silent whoosh of its trackball as I played Brick Breaker on the subway and the feel of its baby keys clicking under my fat thumbs. It was the world in my hands, and when I had to turn it off, I felt anxious and alone.

Like most relationships we plunge into with hearts aflutter, our love affair with digital technology promised us the world: more friends, money and democracy! Free music, news and same-day shipping of paper towels! A laugh a minute, and a constant party at our fingertips.

Many of us bought into the fantasy that digital made everything better. We surrendered to this idea, and mistook our dependence for romance, until it was too late.

Today, when my phone is on, I feel anxious and count down the hours to when I am able to turn it off and truly relax. The love affair I once enjoyed with digital technology is over — and I know I’m not alone.

Ten years after the iPhone first swept us off our feet, the growing mistrust of computers in both our personal lives and the greater society we live in is inescapable. This publishing season is flush with books raising alarms about digital technology’s pernicious effects on our lives: what smartphones are doing to our children; how Facebook and Twitter are eroding our democratic institutions; and the economic effects of tech monopolies.

Want more? You can read the full article here.

Success?

Earlier this month, I posted a blog that began: “By almost any measure, the U.S. and the world economy are booming. We seem to have moved well beyond the 2008 recession and are moving forward on all cylinders.”

And who is leading the pack? Who is not just in the top 1%, but in the top .1%, or even more decimal places to the right? It’s Silicon Valley’s tech billionaires.

Everyone wants to be them, right? Well, maybe not. That’s why I found Nellie Bowles’s piece, “Soothing the Sting of Success,” so interesting. Here is how the lead-in to the online version began:

“Where Silicon Valley Is Going to Get in Touch With Its Soul: The Esalen Institute, a storied hippie hotel in Big Sur, Calif., has reopened with a mission to help technologists who discover that ‘inside they’re hurting.’”

Who knew?

The article goes on:

Silicon Valley, facing a crisis of the soul, has found a retreat center.

It has been a hard year for the tech industry. Prominent figures like Sean Parker and Justin Rosenstein, horrified by what technology has become, have begun to publicly denounce companies like Facebook that made them rich.

And so Silicon Valley has come to the Esalen Institute, a storied hippie hotel here on the Pacific coast south of Carmel, Calif. After storm damage in the spring and a skeleton crew in the summer, the institute was fully reopened in October with a new director and a new mission: It will be a home for technologists to reckon with what they have built.

This is a radical change for the rambling old center. Founded in 1962, the nonprofit helped bring yoga, organic food and meditation into the American mainstream.

Want more? You can read the full piece.

Artificial Intelligence

Few technologies have had as big an impact – or promise to have a bigger one in the future – as artificial intelligence, or AI.

That’s why it was no surprise that the New York Times Magazine featured an article entitled “Can A.I. Be Taught to Explain Itself?” For me, it was riveting. Some excerpts:

It has become commonplace to hear that machines, armed with machine learning, can outperform humans at decidedly human tasks, from playing Go to playing “Jeopardy!” We assume that is because computers simply have more data-crunching power than our soggy three-pound brains. Kosinski’s results suggested something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the “black box” problem — the inability to discern exactly what machines are doing when they’re teaching themselves novel skills — and it has become a central concern in artificial-intelligence research. In many arenas, A.I. methods have advanced with startling speed; deep neural networks can now detect certain kinds of cancer as accurately as a human. But human doctors still have to make the decisions — and they won’t trust an A.I. unless it can explain itself.

“Artificial intelligence” is a misnomer, an airy and evocative term that can be shaded with whatever notions we might have about what “intelligence” is in the first place. Researchers today prefer the term “machine learning,” which better describes what makes such algorithms powerful.

The idea was to connect leading A.I. researchers with experts in data visualization and human-computer interaction to see what new tools they might invent to find patterns in huge sets of data. There to judge the ideas, and act as hypothetical users, were analysts for the C.I.A., the N.S.A. and sundry other American intelligence agencies.

Even if a machine made perfect decisions, a human would still have to take responsibility for them — and if the machine’s rationale was beyond reckoning, that could never happen.

Intrigued? You can read the full article here.