AI and Humans

We have posted a great deal about artificial intelligence and its impact on us. That’s why I was drawn to a recent article, “Without Humans, A.I. Can Wreak Havoc.” Here’s how it begins:

The year 1989 is often remembered for events that challenged the Cold War world order, from the protests in Tiananmen Square to the fall of the Berlin Wall. It is less well remembered for what is considered the birth of the World Wide Web. In March of 1989, the British researcher Tim Berners-Lee shared the protocols, including HTML, URL and HTTP, that enabled the internet to become a place of communication and collaboration across the globe.

As the World Wide Web marks its 30th birthday on Tuesday, public discourse is dominated by alarm about Big Tech, data privacy and viral disinformation. Tech executives have been called to testify before Congress, a popular campaign dissuaded Amazon from opening a second headquarters in New York and the United Kingdom is going after social media companies that it calls “digital gangsters.” Implicit in this tech-lash is nostalgia for a more innocent online era.

But longing for a return to the internet’s yesteryears isn’t constructive. In the early days, access to the web was expensive and exclusive, and it was not reflective or inclusive of society as a whole. What is worth revisiting is less how it felt or operated than what the early web stood for. Those first principles of creativity, connection and collaboration are worth reconsidering today as we reflect on the past and the future promise of our digitized society.

The early days of the internet were febrile with dreams about how it might transform our world, connecting the planet and democratizing access to knowledge and power. It has certainly effected great change, if not always what its founders anticipated. If a new democratic global commons didn’t quite emerge, a new demos certainly did: An internet of people who created it, shared it and reciprocated in its use.

This is just a snippet. Want more? You can read the full article here.

Phone Time

Much ink has been spilled about how our smartphones can dominate our lives. But can they shorten them? Catherine Price thinks so. Here’s how she began a recent piece:

If you’re like many people, you may have decided that you want to spend less time staring at your phone.

It’s a good idea: an increasing body of evidence suggests that the time we spend on our smartphones is interfering with our sleep, self-esteem, relationships, memory, attention spans, creativity, productivity and problem-solving and decision-making skills.

But there is another reason for us to rethink our relationships with our devices. By chronically raising levels of cortisol, the body’s main stress hormone, our phones may be threatening our health and shortening our lives.

Until now, most discussions of phones’ biochemical effects have focused on dopamine, a brain chemical that helps us form habits — and addictions. Like slot machines, smartphones and apps are explicitly designed to trigger dopamine’s release, with the goal of making our devices difficult to put down.

This manipulation of our dopamine systems is why many experts believe that we are developing behavioral addictions to our phones. But our phones’ effects on cortisol are potentially even more alarming.

Cortisol is our primary fight-or-flight hormone. Its release triggers physiological changes, such as spikes in blood pressure, heart rate and blood sugar, that help us react to and survive acute physical threats.

These effects can be lifesaving if you are actually in physical danger — like, say, you’re being charged by a bull. But our bodies also release cortisol in response to emotional stressors where an increased heart rate isn’t going to do much good, such as checking your phone to find an angry email from your boss.

Want more? You can read it here.

High-Tech Weapons

Earlier this year, I posted blogs about the new directions for U.S. national security embodied in publications such as the National Security Strategy and the National Defense Strategy. Each of these publications notes that the U.S. military must adopt high technology to ensure it can deal with increasingly capable peer competitors.

The era of United States technological dominance has ended. Indeed, in many areas, including military technology, the gap has narrowed to parity or near-parity, and potential adversaries have all but erased what was once the U.S. military’s trump card: superior technology. Nations such as Russia and China, as well as countries to which these nations proliferate weapons, are deploying advanced weapons that demonstrate many of the same technological strengths that have traditionally provided the high-tech basis for U.S. advantage.

One of the most promising emerging military technologies is directed-energy weapons. The U.S. military already fields many directed-energy systems, such as the laser range finders and targeting systems deployed on tanks, helicopters, tactical fighters and sniper rifles. These laser systems provide both swifter engagements and greatly enhanced precision by shortening the sensor-to-shooter cycle.

Now, directed-energy weapons are poised to shorten, often dramatically, the shooter-to-target cycle. Directed-energy weapons provide a means for instantaneous target engagement, with extremely high accuracy and at long ranges.

This is just a snippet. Want more? You can read the full article here.

Internet Cleanse

Among the most popular diet ideas are those that involve a “cleanse” of some kind. While there are many ways to do this, most require giving up the foods you currently eat.

I’ve always wondered whether we could – and should – apply that to our lives on the internet. I sensed it intuitively but couldn’t articulate it. Thankfully, David Brooks did. Here is part of what he said:

The two most recent times I saw my friend Makoto Fujimura, he put a Kintsugi bowl in my hands. These ceramic bowls were 300 to 400 years old. But what made them special was that somewhere along the way they had broken into shards and were glued back together with a 15th-century technique using Japanese lacquer and gold.

I don’t know about you, but I feel a great hunger right now for timeless pieces like these. The internet has accelerated our experience of time, and Donald Trump has upped the pace of events to permanent frenetic.

There is a rapid, dirty river of information coursing through us all day. If you’re in the news business, or a consumer of the news business, your reaction to events has to be instant or it is outdated. If you’re on social media, there are these swarming mobs who rise out of nowhere, leave people broken and do not stick around to perform the patient Kintsugi act of gluing them back together.

Probably like you, I’ve felt a great need to take a break from this pace every once in a while and step into a slower dimension of time. Mako’s paintings are very good for these moments.

What would it mean to live generationally once in a while, in a world that now finds the daily newspaper too slow?

Want more? You can read it here.

Who Codes

In his best-selling book, The Innovators, Walter Isaacson traces the roots of the computer revolution to the mathematician Ada Lovelace, Lord Byron’s daughter, who lived from 1815 to 1852.

His history moves into the twentieth century and highlights another female computer pioneer, Grace Hopper.

And it is well known that women were some of the earliest coders and software pioneers more than a half-century ago.

But where are female coders today? I’d always wondered, until I read Clive Thompson’s recent piece, “The Secret History of Women in Coding.” Here’s how he begins:

As a teenager in Maryland in the 1950s, Mary Allen Wilkes had no plans to become a software pioneer — she dreamed of being a litigator. One day in junior high in 1950, though, her geography teacher surprised her with a comment: “Mary Allen, when you grow up, you should be a computer programmer!” Wilkes had no idea what a programmer was; she wasn’t even sure what a computer was. Relatively few Americans were. The first digital computers had been built barely a decade earlier at universities and in government labs.

By the time she was graduating from Wellesley College in 1959, she knew her legal ambitions were out of reach. Her mentors all told her the same thing: Don’t even bother applying to law school. “They said: ‘Don’t do it. You may not get in. Or if you get in, you may not get out. And if you get out, you won’t get a job,’ ” she recalls. If she lucked out and got hired, it wouldn’t be to argue cases in front of a judge. More likely, she would be a law librarian, a legal secretary, someone processing trusts and estates.

But Wilkes remembered her junior high school teacher’s suggestion. In college, she heard that computers were supposed to be the key to the future. She knew that the Massachusetts Institute of Technology had a few of them. So on the day of her graduation, she had her parents drive her over to M.I.T. and marched into the school’s employment office. “Do you have any jobs for computer programmers?” she asked. They did, and they hired her.

It might seem strange now that they were happy to take on a random applicant with absolutely no experience in computer programming. But in those days, almost nobody had any experience writing code. The discipline did not yet really exist; there were vanishingly few college courses in it, and no majors. (Stanford, for example, didn’t create a computer-science department until 1965.) So instead, institutions that needed programmers just used aptitude tests to evaluate applicants’ ability to think logically. Wilkes happened to have some intellectual preparation: As a philosophy major, she had studied symbolic logic, which can involve creating arguments and inferences by stringing together and/or statements in a way that resembles coding.

Want more? You can read it here.
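As an aside, the link Thompson draws between symbolic logic and programming is easy to see in code. Below is a minimal sketch in Python, my own illustration rather than anything from the article; the hiring rule and the function name are invented, but the and/or structure is exactly the kind of reasoning those old aptitude tests probed.

    # A made-up example (not from Thompson's article): symbolic logic's
    # and/or statements written as an ordinary boolean expression.

    def qualifies(passed_aptitude_test: bool, has_degree: bool, studied_logic: bool) -> bool:
        """Hypothetical hiring rule: pass the aptitude test AND either
        hold a degree OR have studied formal logic."""
        return passed_aptitude_test and (has_degree or studied_logic)

    # A Wilkes-like case: no programming experience, but a philosophy degree,
    # a background in symbolic logic and a passed aptitude test.
    print(qualifies(passed_aptitude_test=True, has_degree=True, studied_logic=True))  # True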

Facebook

It’s been about five years since Jim Cramer and Bob Lang coined the acronym “FANG” for the mega-cap, high-growth stocks Facebook, Amazon, Netflix and Google (now Alphabet).

And while it just happens to lead a handy acronym, Facebook is quite possibly the most controversial tech company of all time.

For most, this is due to one person, Facebook’s CEO, Mark Zuckerberg. It has been almost a decade since the movie about Facebook and its founder, The Social Network, hit with such force.

We remain fascinated by Facebook and Zuckerberg. We want to learn more, but we want something different. That’s why I was drawn in by a book review of “Zucked.” Here’s how it begins:

The dystopia George Orwell conjured up in “1984” wasn’t a prediction. It was, instead, a reflection. Newspeak, the Ministry of Truth, the Inner Party, the Outer Party — that novel sampled and remixed a reality that Nazi and Soviet totalitarianism had already made apparent. Scary stuff, certainly, but maybe the more frightening dystopia is the one no one warned you about, the one you wake up one morning to realize you’re living inside.

Roger McNamee, an esteemed venture capitalist, would appear to agree. “A dystopian technology future overran our lives before we were ready,” he writes in “Zucked.” Think that sounds like overstatement? Let’s examine the evidence. At its peak the planet’s fourth most valuable company, and arguably its most influential, is controlled almost entirely by a young man with the charisma of a geometry T.A. The totality of this man’s professional life has been running this company, which calls itself “a platform.”

Company, platform — whatever it is, it provides a curious service wherein billions of people fill it with content: baby photos, birthday wishes, concert promotions, psychotic premonitions of Jewish lizard-men. No one is paid by the company for this labor; on the contrary, users are rewarded by being tracked across the web, even when logged out, and consequently strip-mined by a complicated artificial intelligence trained to sort surveilled information into approximately 29,000 predictive data points, which are then made available to advertisers and other third parties, who now know everything that can be known about a person without trepanning her skull. Amazingly, none of this is secret, despite the company’s best efforts to keep it so. Somehow, people still use and love this platform.

Want more? You can read the full article here.

Our World – Our Minds?

It’s not much of a stretch to say that stories about big tech have dominated the headlines in recent years. We’ve all read them – and many of them are less than flattering.

That’s why I gravitated to a new book: “World Without Mind: The Existential Threat of Big Tech.” While I have my own – strong – opinions about what the book says, I found John Herrman’s review clarifying in explaining its import. Here is how he begins:

The technology critic is typically a captive figure, beholden either to a sorrowful past, a panicked present or an arrogant future. In his proudest moments, he resembles something like a theorist of transformation, decline and creation. In his lowest, he is more like a speaking canary, prone to prophecy, a game with losing odds. His attempts at optimism are framed as counterintuitive, faring little better, in predictive terms, than his lapses into pessimism. He teeters hazardously between implicating his audience and merely giving their anxieties a name. He — and it is almost always a he — is the critical equivalent of an unreliable narrator, unable to write about technology without also writing about himself. Occasionally, he is right: about what is happening, about what should happen, and about what it means. And so he carries on, and his audience with him.

Franklin Foer, thankfully, recognizes these pitfalls even if he can’t always avoid them. Who can? The melodramatically titled “World Without Mind,” Foer’s compact attempt at a broad technological polemic — which identifies the stupendous successes of Amazon, Google and Facebook, among others, as an “existential threat” to the individual and to society — begins with a disclaimer. Foer’s tumultuous stint editing The New Republic under the ownership of the Facebook co-founder Chris Hughes ended with mass resignations and public acrimony. “There’s no doubt that this experience informs the argument of this book,” he writes. He is likewise entangled through his proximity to publishing: The author’s friends, colleagues and immediate family members — including his brother, the novelist Jonathan Safran Foer — depend to different degrees on the industry Amazon consumed first. The book is dedicated to his father, Bert, a crusading antitrust lawyer.

In this slightly crouched posture, and with a hint of healthy self-doubt, Foer proceeds quickly. We, the consuming public, have failed to properly understand the new tech superpowers, he suggests, leaving little hope for stodgy and reluctant American regulators. The scope of their influence is obscured by the sheer number of things they do and sell, or problems they purport to be solving, and by our outdated sense of what constitutes a monopoly. To that end, Foer promotes the concept of the “knowledge monopoly,” which he qualifies with a mischievous grin. “My hope is that we revive ‘monopoly’ as a core piece of political rhetoric that broadly denotes dominant firms with pernicious powers,” he says, rather than as a “technical” term referring to one company cornering a market. (His new monopolists, after all, aren’t raising prices. They’re giving things away free.)

Want more? You can read the full article here.

One Trillion

The stock market – especially tech – has been down a bit, and it’s easy to forget that not that long ago Apple was valued at one trillion dollars.

There has been a great deal of breathless reporting on this milestone, but much less thoughtful analysis. That’s why I was taken by Jack Nicas’ piece. Here’s how he began:

SAN FRANCISCO — In 1997, Apple was on the ropes. The Silicon Valley pioneer was being decimated by Microsoft and its many partners in the personal-computer market. It had just cut a third of its work force, and it was about 90 days from going broke, Apple’s late co-founder, Steve Jobs, later said.

Recently, Apple became the first publicly traded American company to be worth more than $1 trillion when its shares climbed 3 percent to end the day at $207.39. The gains came two days after the company announced the latest in a series of remarkably profitable quarters.

Apple’s ascent from the brink of bankruptcy to the world’s most valuable public company has been a business tour de force, marked by rapid innovation, a series of smash-hit products and the creation of a sophisticated, globe-spanning supply chain that keeps costs down while producing enormous volumes of cutting-edge devices.

That ascent has also been marked by controversy, tragedy and challenges. Apple’s aggressive use of outside manufacturers in China, for example, has led to criticism that it is taking advantage of poorly paid workers in other countries and robbing Americans of good manufacturing jobs. The company faces numerous questions about how it can continue to grow.

This is just a snippet. Want more? You can read the full article here.

Why Gig?

As you hold your smartphone and consider how it has changed your life, you might be inclined to think that the tech industry alone created the gig economy. But you would be wrong.

The gig economy is enabled by technology, but technology didn’t create it. It is a result of the insecure nature of work today – a far cry from the world of baby boomers’ parents, who went to work for one company and retired at 65 with a gold watch.

I read one of the best explanations of this change in a piece entitled “The Gig Economy Isn’t the iPhone’s Fault.” Here’s how it began:

When we learn about the Industrial Revolution in school, we hear a lot about factories, steam engines, maybe the power loom. We are taught that technological innovation drove social change and radically reshaped the world of work.

Likewise, when we talk about today’s economy, we focus on smartphones, artificial intelligence, apps. Here, too, the inexorable march of technology is thought to be responsible for disrupting traditional work, phasing out the employee with a regular wage or salary and phasing in independent contractors, consultants, temps and freelancers — the so-called gig economy.

But this narrative is wrong. The history of labor shows that technology does not usually drive social change. On the contrary, social change is typically driven by decisions we make about how to organize our world. Only later does technology swoop in, accelerating and consolidating those changes.

This insight is crucial for anyone concerned about the insecurity and other shortcomings of the gig economy. For it reminds us that far from being an unavoidable consequence of technological progress, the nature of work always remains a matter of social choice. It is not a result of an algorithm; it is a collection of decisions by corporations and policymakers.

Want more? You can read the full article here.

Tech Anniversary

San Francisco in 1968 was littered with flower children, free love and dreams of utopia encapsulated in Timothy Leary’s exhortation: “Turn on, tune in, drop out.” How wrong that was! But out of this purple haze rose that year’s Joint Computer Conference, where an assembly of geniuses wearing white short-sleeved shirts and pocket protectors convened 50 years ago this week. The event shined a guiding light on the path to personal computing and set the modern world in motion.

On Dec. 9, 1968, Doug Engelbart of the Stanford Research Institute presented what’s now known as “The Mother of All Demos.” Using a homemade modem, a video feed from Menlo Park, and a quirky hand-operated device, Engelbart gave a 90-minute demonstration of hypertext, videoconferencing, teleconferencing and a networked operating system. Oh, and a graphical user interface, display editing, multiple windows, shared documents, context-sensitive help and a digital library. Mother of all demos is right. That quirky device later became known as the computer mouse. The audience felt as if it had stepped into Oz, watching the world transform from black-and-white to color. But it was no hallucination.

So what have we learned in 50 years? First, augmenting humans is the purpose of technology and ought not be feared. Engelbart described the possibilities in a 1970 paper. “There will emerge a new ‘marketplace,’ representing fantastic wealth in commodities of knowledge, service, information, processing, storage,” he predicted. “In the number and range of transactions, and in the speed and flexibility with which they are negotiated, this new market will have a vitality and dynamism as much greater than today’s as today’s is greater than the village market.” Today Google is Memex 1.0, while Amazon and a whole world of e-commerce have realized the digital market.