2001 at 50

Any votes for the most prescient film of the last century? One that looked ahead to a future that most could only dimly perceive.

My vote is for Stanley Kubrick’s “2001: A Space Odyssey.” Forward-looking only begins to describe this work. Here is how Michael Benson begins his piece in the Wall Street Journal:

Fifty years ago, invitation-only audiences gathered in specially equipped Cinerama theaters in Washington, New York and Los Angeles to preview a widescreen epic that director Stanley Kubrick had been working on for four years. Conceived in collaboration with the science-fiction writer Arthur C. Clarke, “2001: A Space Odyssey” was way over budget, and Hollywood rumor held that MGM had essentially bet the studio on the project.

The film’s previews were an unmitigated disaster. Its story line encompassed an exceptional temporal sweep, starting with the initial contact between pre-human ape-men and an omnipotent alien civilization and then vaulting forward to later encounters between Homo sapiens and the elusive aliens, represented throughout by the film’s iconic metallic-black monolith. Although featuring visual effects of unprecedented realism and power, Kubrick’s panoramic journey into space and time made few concessions to viewer understanding. The film was essentially a nonverbal experience. Its first words came only a good half-hour in.

Audience walkouts numbered well over 200 at the New York premiere on April 3, 1968, and the next day’s reviews were almost uniformly negative. Writing in the Village Voice, Andrew Sarris called the movie “a thoroughly uninteresting failure and the most damning demonstration yet of Stanley Kubrick’s inability to tell a story coherently and with a consistent point of view.” And yet that afternoon, a long line—comprised predominantly of younger people—extended down Broadway, awaiting the first matinee.

Stung by the initial reactions and under great pressure from MGM, Kubrick soon cut almost 20 minutes from the film. Although “2001” remained willfully opaque and open to interpretation, the trims removed redundancies, and the film spoke more clearly. Critics began to come around. In her review for the Boston Globe, Marjorie Adams, who had seen the shortened version, called it “the world’s most extraordinary film. Nothing like it has ever been shown in Boston before, or for that matter, anywhere. The film is as exciting as the discovery of a new dimension in life.”

Fifty years later, “2001: A Space Odyssey” is widely recognized as ranking among the most influential movies ever made. The most respected poll of such things, conducted every decade by the British Film Institute’s Sight & Sound magazine, asks the world’s leading directors and critics to name the 100 greatest films of all time. The last BFI decadal survey, conducted in 2012, placed it at No. 2 among directors and No. 6 among critics. Not bad for a film that critic Pauline Kael had waited a contemptuous 10 months before dismissing as “trash masquerading as art” in the pages of Harper’s.

Want to read more?

Turning up the Gain on AI

The United States is at war with China. No, it’s not the trade war. It is the war to dominate artificial intelligence, or AI.

Earlier this month, in my blog post “AI on the March,” I described the enormous strides China is making in AI. Its progress – and its plans for future AI development – are ambitious and sobering.

The United States isn’t standing still. The Center for a New American Security (CNAS) recently announced the launch of its Task Force on Artificial Intelligence and National Security, which will examine how the United States should respond to the national security challenges posed by artificial intelligence. The task force will be co-chaired by former Deputy Secretary of Defense Robert O. Work and Dr. Andrew Moore, Dean of the School of Computer Science at Carnegie Mellon University.

“The task force will draw together private industry leaders, former senior government officials, and academic experts to take on the challenges of the AI revolution,” said CNAS Senior Fellow Paul Scharre, who will serve as executive director of the AI Task Force. “I am thrilled to have such an impressive roster of national security leaders and artificial intelligence experts join us in this endeavor.”

“We find ourselves on the leading edge of new industrial and military revolutions, powered by AI; machine learning; and autonomous, unmanned systems and robots,” said Secretary Work. “The United States must consider and prepare for the associated national security challenges – whether in cyber-security, surveillance, disinformation, or defense. CNAS’ AI Task Force will help frame the policy issues surrounding these unique challenges.”

Task force Co-Chair Dr. Andrew Moore said that a key tenet of this signature initiative rests in the importance of human judgment. “Central to all of this is ensuring that such systems work with humans in a way which empowers the human, not replaces the human, and which keeps ultimate decision authority with the human. That is why I am so excited by the mission of the task force.”

AI on the March

Few would dispute the benefits that AI and machine learning deliver. AI surrounds us in all we do and touches more and more of our daily lives.

American companies like Amazon and Google have done more than anyone to turn A.I. concepts into real products. But for a number of reasons, much of the critical research being done on artificial intelligence is already migrating to other countries, with China poised to take over that leadership role. In July, China unveiled a plan to become the world leader in artificial intelligence and create an industry worth $150 billion to its economy by 2030.

To technologists working on A.I. in the United States, the statement, which was 28 pages long in its English translation, was a direct challenge to America’s lead in arguably the most important tech research to come along in decades. It outlined the Chinese government’s aggressive plan to treat A.I. like the country’s own version of the Apollo 11 lunar mission — an all-in effort that could stoke national pride and spark agenda-setting technology breakthroughs.

The manifesto was also remarkably similar to several reports on the future of artificial intelligence released by the Obama administration at the end of 2016.

“It is remarkable to see how A.I. has emerged as a top priority for the Chinese leadership and how quickly things have been set into motion,” said Elsa Kania, an adjunct fellow at the Center for a New American Security who helped translate the manifesto and follows China’s work on artificial intelligence. “The U.S. plans and policies released in 2016 were seemingly the impetus for the formulation of China’s national A.I. strategy.”

Want more? You can read the full article here.

Our New Rulers

Much ink has been spilled about the enormous, most would say outsize, impact that the biggest technology companies have on our lives.

Much of this commentary has been shrill, so when a thoughtful article on the subject appears, it’s worth highlighting.

Farhad Manjoo nailed it in his piece, “The Frightful Five Want to Rule Entertainment. They Are Hitting Limits.” Here is how he begins:

The tech giants are too big. Other than Donald J. Trump, that’s the defining story of 2017, the meta-narrative lurking beneath every other headline.

The companies I call the Frightful Five — Amazon, Apple, Facebook, Microsoft and Alphabet, Google’s parent company — have experienced astounding growth over the last few years, making them the world’s five most valuable public companies. Because they own the technology that will dominate much of life for the foreseeable future, they are also gaining vast social and political power over much of the world beyond tech.

Now that world is scrambling to figure out what to do about them. And it is discovering that the changes they are unleashing — in the economy, in civic and political life, in arts and entertainment, and in our tech-addled psyches — are not simple to comprehend, let alone to limit.

I’ve spent the last few years studying the rise of these giants. As tensions over their power reached a high boil this summer — Facebook and Russia, Google and sexism, Amazon and Whole Foods — I began thinking more about the nature and consequence of their power, and talking to everyone I could find about these companies. Among them were people in the tech industry, as well as many in other power centers: Washington, Hollywood, the media, the health care and automotive businesses, and other corners of society that may soon be ensnared by one or more of the Five.

Want to read more?

Digital World

Those of us “of a certain age” recall analog. Everything was analog. Then along came digital, and analog and digital coexisted more or less peacefully. Then digital took over.

Digital brought previously unimaginable benefits and showered us with products we didn’t even know we needed. With it came help lines where experts could guide us through using our devices.

That’s why I found David Sax’s piece, “Our Love Affair With Digital Is Over,” so on point – it heralds a move back toward analog. Here is part of what he shared:

A decade ago I bought my first smartphone, a clunky little BlackBerry 8830 that came in a sleek black leather sheath. I loved that phone. I loved the way it effortlessly slid in and out of its case, loved the soft purr it emitted when an email came in, loved the silent whoosh of its trackball as I played Brick Breaker on the subway and the feel of its baby keys clicking under my fat thumbs. It was the world in my hands, and when I had to turn it off, I felt anxious and alone.

Like most relationships we plunge into with hearts aflutter, our love affair with digital technology promised us the world: more friends, money and democracy! Free music, news and same-day shipping of paper towels! A laugh a minute, and a constant party at our fingertips.

Many of us bought into the fantasy that digital made everything better. We surrendered to this idea, and mistook our dependence for romance, until it was too late.

Today, when my phone is on, I feel anxious and count down the hours to when I am able to turn it off and truly relax. The love affair I once enjoyed with digital technology is over — and I know I’m not alone.

Ten years after the iPhone first swept us off our feet, the growing mistrust of computers in both our personal lives and the greater society we live in is inescapable. This publishing season is flush with books raising alarms about digital technology’s pernicious effects on our lives: what smartphones are doing to our children; how Facebook and Twitter are eroding our democratic institutions; and the economic effects of tech monopolies.

Want more? You can read the full article here.

Success?

Earlier this month, I posted a blog that began: “By almost any measure, the U.S. and the world economy are booming. We seem to have moved well beyond the 2008 recession and are firing on all cylinders.”

And who is leading the pack? Who is not just in the top 1%, but in the top 0.1% – or even further to the right of the decimal point? It’s Silicon Valley’s tech billionaires.

Everyone wants to be them, right? Well, maybe not. That’s why I found Nellie Bowles’s piece, “Soothing the Sting of Success,” so interesting. Here is how the lead-in to the online version began:

“Where Silicon Valley Is Going to Get in Touch With Its Soul: The Esalen Institute, a storied hippie hotel in Big Sur, Calif., has reopened with a mission to help technologists who discover that ‘inside they’re hurting.’”

Who knew?

The article goes on:

Silicon Valley, facing a crisis of the soul, has found a retreat center.

It has been a hard year for the tech industry. Prominent figures like Sean Parker and Justin Rosenstein, horrified by what technology has become, have begun to publicly denounce companies like Facebook that made them rich.

And so Silicon Valley has come to the Esalen Institute, a storied hippie hotel here on the Pacific coast south of Carmel, Calif. After storm damage in the spring and a skeleton crew in the summer, the institute was fully reopened in October with a new director and a new mission: It will be a home for technologists to reckon with what they have built.

This is a radical change for the rambling old center. Founded in 1962, the nonprofit helped bring yoga, organic food and meditation into the American mainstream.

Want more? You can read the full piece.

Artificial Intelligence

Few technologies have had as big an impact – or promise a bigger one in the future – as artificial intelligence, or AI.

That’s why it was no surprise that The New York Times Magazine featured an article entitled “Can A.I. Be Taught to Explain Itself?” For me, it was riveting. Some excerpts:

It has become commonplace to hear that machines, armed with machine learning, can outperform humans at decidedly human tasks, from playing Go to playing “Jeopardy!” We assume that is because computers simply have more data-crunching power than our soggy three-pound brains. Kosinski’s results suggested something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the “black box” problem — the inability to discern exactly what machines are doing when they’re teaching themselves novel skills — and it has become a central concern in artificial-intelligence research. In many arenas, A.I. methods have advanced with startling speed; deep neural networks can now detect certain kinds of cancer as accurately as a human. But human doctors still have to make the decisions — and they won’t trust an A.I. unless it can explain itself.

“Artificial intelligence” is a misnomer, an airy and evocative term that can be shaded with whatever notions we might have about what “intelligence” is in the first place. Researchers today prefer the term “machine learning,” which better describes what makes such algorithms powerful.

The idea was to connect leading A.I. researchers with experts in data visualization and human-computer interaction to see what new tools they might invent to find patterns in huge sets of data. There to judge the ideas, and act as hypothetical users, were analysts for the C.I.A., the N.S.A. and sundry other American intelligence agencies.

Even if a machine made perfect decisions, a human would still have to take responsibility for them — and if the machine’s rationale was beyond reckoning, that could never happen.
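
The “black box” problem the article describes is easy to see in miniature. What follows is a minimal sketch – my own illustration, not anything from the article – in Python with scikit-learn (an assumed, commonly available library). It trains two models on the same tumor-classification data: a shallow decision tree whose rules can be printed and audited, and a small neural network that reaches comparable accuracy but offers no human-readable account of its decisions. The dataset, model sizes, and parameters are all illustrative choices.

# A minimal sketch of the "black box" contrast described above.
# Both models classify the same tumor dataset; only the decision tree
# can print the rules behind its predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# An interpretable model: its decision path reads like a checklist.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)
print("tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(data.feature_names)))

# A small neural network: comparable accuracy, but its learned weights
# offer no human-readable account of why it decides as it does.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("net accuracy:", net.score(X_test, y_test))

The point is not the accuracy scores; it is that the tree’s reasoning can be read line by line while the network’s cannot – exactly the gap that explainable-A.I. research is trying to close.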

Intrigued? You can read the full article here.

Tech Rising

What do technology and architecture have in common? Your first reaction might be, “not much,” but a closer look at what is happening to the San Francisco skyline might change your mind.

David Streitfeld’s recent piece, “San Francisco’s Skyline, Now Inescapably Transformed by Tech,” features the subtitle: “Salesforce Tower, which at 1,070 feet is the tallest office building west of the Mississippi, will be inhabited in January, signaling tech’s triumph in the city.”

This short piece in the Sunday New York Times business section marks not just an association, but a marriage, between technology and architecture.

Streitfeld notes that in Silicon Valley, the office parks blend into the landscape. They might have made their workers exceedingly rich, and they might have changed the world — whether for better or worse is currently up for debate — but there is nothing about them that says: We are a big deal.

Skyscrapers tell a different story. They are the pyramids of our civilization, permanent monuments of our existence. They show who is in charge and what they think about themselves. Salesforce Tower is breaking a San Francisco height record that stood for nearly half a century.

Intrigued? You can read the full article here.

AI and You!

Few subjects have captured the public’s imagination more than artificial intelligence (AI) and machine learning. A niche tech subject just a few years ago, AI has now gone mainstream.

Part of this is because we are surrounded by digital apps like Siri and Cortana that inform and entertain us daily (just ask Siri, “What is zero divided by zero?”).

But AI will play a much more profound role in our lives in the future – though we may have to wait for it. Here is part of what Steve Lohr shared recently in a New York Times piece:

There are basically three big questions about artificial intelligence and its impact on the economy: What can it do? Where is it headed? And how fast will it spread?

Three new reports combine to suggest these answers: It can probably do less right now than you think. But it will eventually do more than you probably think, in more places than you probably think, and will probably evolve faster than powerful technologies have in the past.

This bundle of research is itself a sign of the A.I. boom. Researchers across disciplines are scrambling to understand the likely trajectory, reach and influence of the technology — already finding its way into things like self-driving cars and image recognition online — in all its dimensions. Doing so raises a host of challenges of definition and measurement, because the field is moving quickly — and because companies are branding things A.I. for marketing purposes.

An “AI Index,” created by researchers at Stanford University, the Massachusetts Institute of Technology and other organizations, released on Thursday, tracks developments in artificial intelligence by measuring aspects like technical progress, investment, research citations and university enrollments. The goal of the project is to collect, curate and continually update data to better inform scientists, businesspeople, policymakers and the public.

Want more? You can read the full article here.

Silicon Valley: Your Friend?

Almost from its inception, the World Wide Web produced public anxiety — your computer was joined to a network that was beyond your ken and could send worms, viruses and trackers your way — but we nonetheless were inclined to give these earnest innovators the benefit of the doubt. They were on our side in making the web safe and useful, and thus it became easy to interpret each misstep as an unfortunate accident on the path to digital utopia rather than as subterfuge meant to ensure world domination.

Now that Google, Facebook and Amazon have become world dominators, the questions of the hour are: Can the public be convinced to see Silicon Valley as the wrecking ball that it is? And do we still have the regulatory tools and social cohesion to restrain the monopolists before they smash the foundations of our society?

By all accounts, these programmers turned entrepreneurs believed their lofty words and were at first indifferent to getting rich from their ideas. A 1998 paper by Sergey Brin and Larry Page, then computer-science graduate students at Stanford, stressed the social benefits of their new search engine, Google, which would be open to the scrutiny of other researchers and wouldn’t be advertising-driven. The public needed to be assured that searches were uncorrupted, that no one had put his finger on the scale for business reasons.

Intrigued? You can read the entire article here.