Social Media


Much ink has been spilled about how much social media impacts our lives – much of it shrill. That’s why I was taken with a recent piece, “Tweeting Into the Abyss,” in which the writer reviews Jaron Lanier’s book, “Ten Arguments for Deleting Your Social Media Accounts Right Now.” If that title doesn’t get your attention, what will? Here’s how it begins:

My self-justifications were feeble. They could be described as hypocritical even. I had written a book denouncing Facebook, yet maintained an account on Mark Zuckerberg’s manipulation machine. Despite my comprehensive awareness of the perils, I would occasionally indulge in the voyeurism of the News Feed, succumb to zombie scrolling and would take the hit of dopamine that Sean Parker, Facebook’s founding president, has admitted is baked into the product. In internal monologues, I explained my behavior as a professional necessity. How could I describe the perniciousness of the platform if I never used it?

Critics of the big technology companies have refrained from hectoring users to quit social media. It’s far more comfortable to slam a corporate leviathan than it is to shame your aunt or high school pals — or, for that matter, to jettison your own long list of “friends.” As our informational ecosystem has been rubbished, we have placed very little onus on the more than two billion users of Facebook and Twitter. So I’m grateful to Jaron Lanier for redistributing blame on the lumpen-user, for pressing the public to flee social media. He writes, “If you’re not part of the solution, there will be no solution.”

Want more? You can read the full article here.

We Like Us


One of the things most people agree on is that high self-esteem is good, and low self-esteem is bad. Most of us more or less accept that “truth.”

That’s why I was quite taken by the review of “Selfie,” a book that tries to get at the root of how we’ve gone from simply having self-esteem to being self-obsessed. Here’s how it begins:

Worrying about one’s own narcissism has a whiff of paradox. If we are suffering from self-obsession, should we really feed the disease by poring over another book about ourselves? Well, perhaps just one more.

“Selfie: How We Became So Self-Obsessed and What It’s Doing to Us,” by Will Storr, a British reporter and novelist, is an intriguing odyssey of self-discovery, in two senses. First, it tells a personal tale. Storr confesses to spending much of his time in a state of self-loathing and he would like to know why. On a quest to explore self-esteem and its opposite, he interviews all sorts of people, from CJ, a young American woman whose life revolves around snapping, processing and posting hundreds of thousands of selfies, to John, a vicious London gangster who repented of his selfish ways, possibly because of his mother’s prayers to St. Jude. Storr takes part in encounter groups in California, grills a Benedictine monk cloistered at Pluscarden Abbey in Scotland, and gets academic psychologists to chat frankly about their work. Storr’s side of the conversations he recounts tends to be blunt, inquisitive and peppered with salty British swearing. One comes to like him, even if he does not often like himself.

Want more? You can read the full article here.

Battling Moguls – Killer Robots


Earlier this month I posted a blog entry regarding one of the most controversial issues at the nexus of technology and national security: concerns over the “militarization” of artificial intelligence – AI.

Initially an issue consigned to just a few defense-related publications and websites, it has now moved front and center. Some of what is said is shrill, but some is far less so.

That’s why I was taken by a piece in the New York Times entitled:

Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots

with the subtitle

As the tech moguls disagree over the risks presented by something that doesn’t exist yet, all of Silicon Valley is learning about unintended consequences of A.I.

Here’s how it begins:

Mark Zuckerberg thought his fellow Silicon Valley billionaire Elon Musk was behaving like an alarmist.

Mr. Musk, the entrepreneur behind SpaceX and the electric-car maker Tesla, had taken it upon himself to warn the world that artificial intelligence was “potentially more dangerous than nukes” in television interviews and on social media.

So, on Nov. 19, 2014, Mr. Zuckerberg, Facebook’s chief executive, invited Mr. Musk to dinner at his home in Palo Alto, Calif. Two top researchers from Facebook’s new artificial intelligence lab and two other Facebook executives joined them.

As they ate, the Facebook contingent tried to convince Mr. Musk that he was wrong. But he wasn’t budging. “I genuinely believe this is dangerous,” Mr. Musk told the table, according to one of the dinner’s attendees, Yann LeCun, the researcher who led Facebook’s A.I. lab.

Mr. Musk’s fears of A.I., distilled to their essence, were simple: If we create machines that are smarter than humans, they could turn against us. (See: “The Terminator,” “The Matrix,” and “2001: A Space Odyssey.”) Let’s for once, he was saying to the rest of the tech industry, consider the unintended consequences of what we are creating before we unleash it on the world.

Neither Mr. Musk nor Mr. Zuckerberg would talk in detail about the dinner, which has not been reported before, or their long-running A.I. debate.

The creation of “superintelligence” — the name for the supersmart technological breakthrough that takes A.I. to the next level and creates machines that not only perform narrow tasks that typically require human intelligence (like self-driving cars) but can actually outthink humans — still feels like science fiction. But the fight over the future of A.I. has spread across the tech industry.

You can read the full article here.

AI and National Security


One of the most controversial issues at the nexus of technology and national security concerns the “militarization” of artificial intelligence – AI.

While this has been an issue for some time, it recently grabbed banner headlines with Google’s support for a Pentagon initiative called “Project Maven.”

The company’s relationship with the Defense Department since it won a share of the contract for the Maven program, which uses artificial intelligence to interpret video images and could be used to improve the targeting of drone strikes, has touched off an existential crisis, according to emails and documents reviewed by The Times as well as interviews with about a dozen current and former Google employees.

Google, hoping to head off a rebellion by employees upset that the technology they were working on could be used for lethal purposes, will not renew a contract with the Pentagon for artificial intelligence work when a current deal expires next year.

But it is not unusual for Silicon Valley’s big companies to have deep military ties. And the internal dissent over Maven stands in contrast to Google’s biggest competitors for selling cloud-computing services — Amazon.com and Microsoft — which have aggressively pursued Pentagon contracts without pushback from their employees.

Expect this issue to remain controversial as the U.S. military faces increasingly capable foes and as AI and machine learning offer ways to help our warfighters prevail.

You can read these two articles here and here.

We Have An App


Few writers have a knack for taking difficult subjects – especially technology – and making them understandable to the lay person. Tom Friedman is one of those few.

I read his book, “Thank You for Being Late” some time ago, and found it interesting and enlightening. However, I never really felt I was able to capture succinctly just what the book was about. Then I came across an old review of the book in the Wall Street Journal. Here’s how it began:

Change is nothing new. Nobel laureate Bob Dylan sang that the times they were a-changin’ back in 1964. What has changed is the pace of change: “The three largest forces on the planet—technology, globalization, and climate change—are all accelerating at once,” notes New York Times columnist Thomas L. Friedman in “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations.” Gradual change allows for adaptation; one generation figures out trains, another airplanes. Now, in a world where taxi-cab regulators will figure out Uber just in time for self-driving cars to render such services obsolete, “so many aspects of our societies, workplaces, and geopolitics are being reshaped and need to be reimagined.” All of it creates a sense of discomfort and provokes backlash—witness Brexit and the American presidential election. Yet there is cause for optimism, Mr. Friedman believes. Humans are crafty creatures.

In this book, Mr. Friedman tries to press pause. The title comes from the author’s exclamation to a tardy breakfast companion: The unexpected downtime had given him an opportunity to reflect. If we all take such time to think, he claims, we can figure out how to “dance in a hurricane.” It’s a comforting idea, though one wonders why, if Mr. Friedman was so happy for this pre-breakfast downtime, he was busily scheduling daily breakfast meetings in the first place. Likewise, this ambitious book, while compelling in places, skips about a lot. His attempt to cover much of the history of modern technology, for instance, quickly descends into gee-whiz moments and ubiquitous exclamation points. Big-belly garbage cans have sensors that wirelessly announce when they need to be emptied, and so Mr. Friedman marvels that “yes, even the garbageman is a tech worker now. . . . That garbage can could take an SAT exam!”

Want to read more?

Tech and the Military


What fuels the U.S. military today isn’t hardware, but software. And it’s not just the kind of software you use on your home computer or your video games.

Today’s military arms race involves artificial intelligence and machine learning. And the U.S. companies leading that effort are the big tech companies: Alphabet (Google’s parent), Facebook and others.

The U.S. military has gone to these companies for one reason – so our warfighters have an edge against an adversary.

It was almost inevitable that challenges would arise from this uneasy marriage – and now they have.

Here is how a recent article, “A Google Military Project Fuels Internal Dissent,” begins – and this may be just the tip of the iceberg:

Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company’s involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes.

The letter, which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash between Silicon Valley and the federal government that is likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes.

“We believe that Google should not be in the business of war,” says the letter, addressed to Sundar Pichai, the company’s chief executive. It asks that Google pull out of Project Maven, a Pentagon pilot program, and announce a policy that it will not “ever build warfare technology.”

You can read the full article here.

Too Big?


Over the past several weeks, Facebook has dominated the news, with its CEO testifying on Capitol Hill in front of angry lawmakers.

But another tech firm is under the same – even greater – scrutiny, in the same way large mega-companies have been for most of our country’s history.

Critics say the search giant is squelching competition before it begins. Should the government step in? Charles Duhigg sheds some light. Here is part of what he says:

Google has succeeded where Genghis Khan, communism and Esperanto all failed: It dominates the globe. Though estimates vary by region, the company now accounts for an estimated 87 percent of online searches worldwide. It processes trillions of queries each year, which works out to at least 5.5 billion a day, 63,000 a second. So odds are good that sometime in the last week, or last hour, or last 10 minutes, you’ve used Google to answer a nagging question or to look up a minor fact, and barely paused to consider how near-magical it is that almost any bit of knowledge can be delivered to you faster than you can type the request. If you’re old enough to remember the internet before 1998, when Google was founded, you’ll recall what it was like when searching online involved AltaVista or Lycos and consistently delivered a healthy dose of spam or porn. (Pity the early web enthusiasts who innocently asked Jeeves about “amateurs” or “steel.”)

In other words, it’s very likely you love Google, or are at least fond of Google, or hardly think about Google, the same way you hardly think about water systems or traffic lights or any of the other things you rely on every day. Therefore you might have been surprised when headlines began appearing last year suggesting that Google and its fellow tech giants were threatening everything from our economy to democracy itself. Lawmakers have accused Google of creating an automated advertising system so vast and subtle that hardly anyone noticed when Russian saboteurs co-opted it in the last election. Critics say Facebook exploits our addictive impulses and silos us in ideological echo chambers. Amazon’s reach is blamed for spurring a retail meltdown; Apple’s economic impact is so profound it can cause market-wide gyrations. These controversies point to the growing anxiety that a small number of technology companies are now such powerful entities that they can destroy entire industries or social norms with just a few lines of computer code. Those four companies, plus Microsoft, make up America’s largest sources of aggregated news, advertising, online shopping, digital entertainment and the tools of business and communication. They’re also among the world’s most valuable firms, with combined annual revenues of more than half a trillion dollars.

Want more? You can read the full piece here.

2001 at 50


Any votes for the most prescient film of the last century? One that looked ahead to a future that most could only dimly perceive.

My vote is for Stanley Kubrick’s “2001: A Space Odyssey.” Forward-looking only begins to describe this work. Here is how Michael Benson begins his piece in the Wall Street Journal:

Fifty years ago, invitation-only audiences gathered in specially equipped Cinerama theaters in Washington, New York and Los Angeles to preview a widescreen epic that director Stanley Kubrick had been working on for four years. Conceived in collaboration with the science-fiction writer Arthur C. Clarke, “2001: A Space Odyssey” was way over budget, and Hollywood rumor held that MGM had essentially bet the studio on the project.

The film’s previews were an unmitigated disaster. Its story line encompassed an exceptional temporal sweep, starting with the initial contact between pre-human ape-men and an omnipotent alien civilization and then vaulting forward to later encounters between Homo sapiens and the elusive aliens, represented throughout by the film’s iconic metallic-black monolith. Although featuring visual effects of unprecedented realism and power, Kubrick’s panoramic journey into space and time made few concessions to viewer understanding. The film was essentially a nonverbal experience. Its first words came only a good half-hour in.

Audience walkouts numbered well over 200 at the New York premiere on April 3, 1968, and the next day’s reviews were almost uniformly negative. Writing in the Village Voice, Andrew Sarris called the movie “a thoroughly uninteresting failure and the most damning demonstration yet of Stanley Kubrick’s inability to tell a story coherently and with a consistent point of view.” And yet that afternoon, a long line—comprised predominantly of younger people—extended down Broadway, awaiting the first matinee.

Stung by the initial reactions and under great pressure from MGM, Kubrick soon cut almost 20 minutes from the film. Although “2001” remained willfully opaque and open to interpretation, the trims removed redundancies, and the film spoke more clearly. Critics began to come around. In her review for the Boston Globe, Marjorie Adams, who had seen the shortened version, called it “the world’s most extraordinary film. Nothing like it has ever been shown in Boston before, or for that matter, anywhere. The film is as exciting as the discovery of a new dimension in life.”

Fifty years later, “2001: A Space Odyssey” is widely recognized as ranking among the most influential movies ever made. The most respected poll of such things, conducted every decade by the British Film Institute’s Sight & Sound magazine, asks the world’s leading directors and critics to name the 100 greatest films of all time. The last BFI decadal survey, conducted in 2012, placed it at No. 2 among directors and No. 6 among critics. Not bad for a film that critic Pauline Kael had waited a contemptuous 10 months before dismissing as “trash masquerading as art” in the pages of Harper’s.

Want to read more?

Turning up the Gain on AI


The United States is at war with China. No, it’s not the trade war. It is the war to dominate artificial intelligence, or AI.

Earlier this month, in my blog post, AI on the March, I described the enormous strides China is making in AI. Its progress – and its plans for future development of AI – are ambitious and sobering.

The United States isn’t standing still. The Center for a New American Security (CNAS) recently announced the launch of its Task Force on Artificial Intelligence and National Security, which will examine how the United States should respond to the national security challenges posed by artificial intelligence. The task force will be chaired by former Deputy Secretary of Defense Robert O. Work and Dr. Andrew Moore, Dean of the School of Computer Science at Carnegie Mellon University.

“The task force will draw together private industry leaders, former senior government officials, and academic experts to take on the challenges of the AI revolution,” said CNAS Senior Fellow Paul Scharre, who will serve as executive director of the AI Task Force. “I am thrilled to have such an impressive roster of national security leaders and artificial intelligence experts join us in this endeavor.”

“We find ourselves on the leading edge of new industrial and military revolutions, powered by AI; machine learning; and autonomous, unmanned systems and robots,” said Secretary Work. “The United States must consider and prepare for the associated national security challenges – whether in cyber-security, surveillance, disinformation, or defense. CNAS’ AI Task Force will help frame the policy issues surrounding these unique challenges.”

Task force Co-Chair Dr. Andrew Moore said that a key tenet of this signature initiative rests in the importance of human judgment. “Central to all of this is ensuring that such systems work with humans in a way which empowers the human, not replaces the human, and which keeps ultimate decision authority with the human. That is why I am so excited by the mission of the task force.”

AI on the March


Few would dispute the benefits that AI and machine learning convey. AI surrounds us in all we do and impacts more and more of our daily lives.

American companies like Amazon and Google have done more than anyone to turn A.I. concepts into real products. But for a number of reasons, much of the critical research being done on artificial intelligence is already migrating to other countries, with China poised to take over that leadership role. In July, China unveiled a plan to become the world leader in artificial intelligence and create an industry worth $150 billion to its economy by 2030.

To technologists working on A.I. in the United States, the statement, which was 28 pages long in its English translation, was a direct challenge to America’s lead in arguably the most important tech research to come along in decades. It outlined the Chinese government’s aggressive plan to treat A.I. like the country’s own version of the Apollo 11 lunar mission — an all-in effort that could stoke national pride and spark agenda-setting technology breakthroughs.

The manifesto was also remarkably similar to several reports on the future of artificial intelligence released by the Obama administration at the end of 2016.

“It is remarkable to see how A.I. has emerged as a top priority for the Chinese leadership and how quickly things have been set into motion,” said Elsa Kania, an adjunct fellow at the Center for a New American Security who helped translate the manifesto and follows China’s work on artificial intelligence. “The U.S. plans and policies released in 2016 were seemingly the impetus for the formulation of China’s national A.I. strategy.”

Want more? You can read the full article here.