Real Things

We live in a digital world, but analog is making a comeback. Some of us sense it; some of us feel it. As Michiko Kakutani explains in her review of David Sax’s book, The Revenge of Analog, there are compelling reasons for this. She begins by quoting Stephen King: “Sooner or later, everything old is new again.”

Here’s part of what’s in this killer-good review:

“In his captivating new book, “The Revenge of Analog,” the reporter David Sax provides an insightful and entertaining account of this phenomenon, creating a powerful counternarrative to the techno-utopian belief that we would live in an ever-improving, all-digital world. Mr. Sax argues that analog isn’t going anywhere, but is experiencing a bracing revival that is not just a case of nostalgia or hipster street cred, but something more complex.”

“Analog experiences can provide us with the kind of real-world pleasures and rewards digital ones cannot,” he writes, and “sometimes analog simply outperforms digital as the best solution.” Pen and paper can give writers and designers a direct means of sketching out their ideas without the complicating biases of software, while whiteboards can bring engineers “out from behind their screens” and entice them “to take risks and share ideas with others.”

You can read the complete review here.

Our Online World

Who are you? Are there two yous? Increasingly, we have two personas: the one we exhibit in person and an entirely different one online. This is something new in our society, something brought on by the internet. Our personas and our behavior do change when we go online. A groundbreaking book helps us understand why.

Jon Ronson titles his review of Mary Aiken’s book, “The Cyber Effect,” “Offline Jekylls, Online Hydes,” and it is apt, because while in person we typically adhere to what we feel “polite society” demands, online we are often different people, sometimes people who are unrecognizable to our family and close friends. Here is part of what Ronson offers:

“This is her provocative and at times compelling thesis: The internet — “the largest unregulated social experiment of all time,” in the words of the clinical psychologist Michael Seto — is turning us, as a species, more mentally disordered, anxious, obsessive, narcissistic, exhibitionist, body dysmorphic, psychopathic, schizophrenic. All this might unleash a “surge in deviant, criminal and abnormal behavior in the general population.” We check our mobile devices 1,500 times a week, sometimes even secretly, before the plane’s pilot tells us it’s safe. Our ethics have become so impaired that some of us take selfies in front of people threatening to jump from bridges. (Having spent years with people disproportionately shamed on social media for some minor transgression, I can attest to how the internet can rob people of empathy.)”

“She paints an evocative image of sitting on a train to Galway, watching a woman breast-feed her baby: “The baby was gazing foggily upward . . . looking adoringly at the mother’s jaw, as the mother continued to gaze adoringly at her device.” How will such a seemingly tiny behavioral shift like less eye contact between mother and baby play out over time? Aiken asks. “This small and simple thing, millions of babies around the world getting less eye contact and less one-on-one attention, could result in an evolutionary blip. Yes, I said it. Evolutionary blip. Less eye contact could change the course of human civilization.””

Thought-provoking? You can read the full article here.

Mr. Moore’s “Law”

Who could have predicted where technology – and especially the high technology that makes it possible for us to have cell phones and tablets and surf the internet – would take us today? Most of us would have lost our shirts if we had placed wagers on technology’s future trajectory – not because our dreams were too wild, but because our view was so limited.

One pioneer did have that vision – over a half-century ago. In 1965, Gordon Moore gave us what we refer to today as Moore’s Law. Here’s part of what Michael Malone had to say in his Wall Street Journal article:

Fifty years ago, on April 19, 1965, chemist and reluctant entrepreneur Gordon E. Moore set out to graph the rapid rate of improvement in semiconductor-chip performance—and ended up discovering the heartbeat of the modern world.

That discovery is what became known as “Moore’s Law,” which is the observation that performance (speed, price, size) of integrated circuits, aka microchips, regularly doubled every 18 months. The graph began as an illustration to an article in Electronics magazine, and it didn’t acquire the name “Moore’s Law” for another decade. And for a decade after that it remained a topic of interest mostly inside the semiconductor industry.

Moore’s Law became iconic not because of its novelty or through media promotion, but because it has proved to be the most effective predictive tool of new chip generations, technology innovation and even social and cultural changes of the last half-century. It achieved all this despite doubts about its durability even by Mr. Moore and the fact that it isn’t really a scientific law.

Yet against all odds and regular predictions of its imminent demise, Moore’s Law endures. The historic doubling of chip performance every 18 months may have slowed to half that pace today, but even pessimists now accept that we likely will live under its regime well into the next decade and beyond.
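
As a back-of-the-envelope illustration of why that doubling matters (my own arithmetic, not Malone’s), here is a minimal Python sketch of how an 18-month doubling compounds over 50 years, and how much the slower pace he mentions changes the outcome:

# Back-of-the-envelope Moore's Law arithmetic (illustrative only).
def performance_multiple(years, doubling_period_years):
    # How many times more capable a chip becomes if capability
    # doubles once every doubling_period_years years.
    return 2 ** (years / doubling_period_years)

print(f"{performance_multiple(50, 1.5):.2e}x")  # 18-month doubling over 50 years: ~1.1e10, about ten billion-fold
print(f"{performance_multiple(50, 3.0):.2e}x")  # 36-month doubling over 50 years: ~1.0e5, about a hundred thousand-fold

Even at the slower pace, the compounding is staggering – which helps explain why the “law” has been such a durable predictive tool.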

Read more about how this “law” has changed our lives here.

Digital World

Do we control our digital world, or does someone else? It seems that we have choices, but do we really?

I recently read I Hate the Internet. Whew. It really made me think about the subject of technology and our lives. And it’s a NOVEL.

If you don’t have time to read the entire book, I’ve posted a link to a New York Times review below.

In his new novel, “I Hate the Internet,” Jarett Kobek performs a similar maneuver on the viscera of the American psyche, at least as regards the so-called information highway. I can’t decide if, on his way down, Mr. Kobek is laughing or weeping.

Here is a key thought: “One of the curious aspects of the 21st century was the great delusion amongst many people, particularly in the San Francisco Bay Area, that freedom of speech and freedom of expression were best exercised on technology platforms owned by corporations dedicated to making as much money as possible.”

Read more here.

More Than the Internet

Legions of companies, CEOs, tech workers and even laypeople are looking for “the next big thing” in technology. We spend time and considerable money at retreats, seminars and the like to try to find that new, new thing. For some, it is an obsession, for others an avocation, and for most of us, at least a curiosity. For me, I try to stay plugged into what Adam Bosworth is doing.

If you have sent email on Google or used Microsoft’s browser or databases, you have touched the technology handiwork of Adam Bosworth. He has a penchant for being intimately involved in the creation of generations of widely used technology.

While it is never easy to predict what the next big thing will be, identifying what Mr. Bosworth is working on is always good for clues. Right now, along with competitors at companies like Amazon and Google, he is building what some call a “data singularity.”

Imagine if almost everything — streets, car bumpers, doors, hydroelectric dams — had a tiny sensor. That is already happening through so-called Internet-of-Things projects run by big companies like General Electric and IBM.

Think of it as one, enormous process in which machines gather information, learn and change based on what they learn – all in seconds.
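
To make that gather-learn-adjust loop concrete, here is a deliberately tiny, hypothetical Python sketch (the simulated sensor and the simple running-average “model” are my own inventions for illustration, not anything from Bosworth’s, Amazon’s or Google’s actual systems):

import random
import time

def read_sensor():
    # Stand-in for one of those tiny sensors (say, a temperature probe).
    return 20.0 + random.gauss(0, 1)

class OnlineMean:
    # Learns continuously: updates its estimate with every new observation.
    def __init__(self):
        self.count = 0
        self.mean = 0.0
    def update(self, value):
        self.count += 1
        self.mean += (value - self.mean) / self.count
        return self.mean

model = OnlineMean()
for _ in range(5):                      # a real system never stops
    reading = read_sensor()             # gather information
    estimate = model.update(reading)    # learn from it
    print(f"reading={reading:.2f}  estimate={estimate:.2f}")
    time.sleep(0.1)                     # new data arrives within seconds

The real systems do this across billions of sensors with far richer models, but the rhythm – gather, learn, adjust, repeat – is the same.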

Read more about the next big thing(s) here.

Moore’s Law – Onward!

I’ve used this blog to talk about technology and to see where technology is pushing the edges of the envelope. Technology continues to dazzle us. But as Arthur Clarke famously said, “Any sufficiently advanced technology is indistinguishable from magic.”

A giant in the history of the computer – indeed of the world we know today – died this month. His name was Gordon Moore.

At the inaugural International Solid-State Circuits Conference held on the campus of the University of Pennsylvania in Philadelphia in 1960, a young computer engineer named Douglas Engelbart introduced the electronics industry to the remarkably simple but groundbreaking concept of “scaling.”

Dr. Engelbart, who would later help develop the computer mouse and other personal computing technologies, theorized that as electronic circuits were made smaller, their components would get faster, require less power and become cheaper to produce — all at an accelerating pace.

Sitting in the audience that day was Gordon Moore, who went on to help found the Intel Corporation, the world’s largest chip maker. In 1965, Dr. Moore quantified the scaling principle and laid out what would have the impact of a computer-age Magna Carta. He predicted that the number of transistors that could be etched on a chip would double annually for at least a decade, leading to astronomical increases in computer power.

His prediction appeared in Electronics Magazine in April 1965 – over a half-century ago – and was later called Moore’s Law. It was never a law of physics, but rather an observation about the economics of a young industry that ended up holding true for a half-century.
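
To put a rough number on “astronomical” (my arithmetic, not the article’s): doubling once a year for a decade compounds to roughly a thousand-fold increase.

print(2 ** 10)  # 1024 – ten annual doublings yield roughly a thousand times more transistors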

Read more in this thoughtful NYT piece here.

Are We the Masters or Servants?

I’ve used this blog to talk about technology and to see where technology is pushing the edges of the envelope and not just delivering a handier Twitter app. Technology continues to dazzle us. But as Arthur Clarke famously said, “Any sufficiently advanced technology is indistinguishable from magic.”

But sometimes this “magic” has made many people wonder and filled others with fear. Few can forget the chilling scene from Stanley Kubrick’s 2001: A Space Odyssey: astronauts David Bowman and Frank Poole consider disconnecting the cognitive circuits of HAL (the Heuristically programmed ALgorithmic computer) when it appears to be mistaken in reporting the presence of a fault in the spacecraft’s communications antenna. They attempt to conceal what they are saying, but are unaware that HAL can read their lips. Faced with the prospect of disconnection, HAL decides to kill the astronauts in order to protect and continue its programmed directives.

Are we still the masters of technology – or are we the servants? Movies like Ex Machina can make us wonder. Unfortunately, there is often more heat than light on the subject.

Read more in this thoughtful NYT Magazine piece here:

http://www.nytimes.com/2013/10/13/magazine/all-is-fair-in-love-and-twitter.html?_r=0

High Tech Nirvana?

Like many of you, I’m a big believer in technology, especially high-tech that springs from the big brains in Silicon Valley and elsewhere. But here’s the question: Will the best brains of the future build things resembling past innovations like cars and electricity or will they spend all their time making Twitter more user-friendly?

It’s worth asking: are the strides we are seeing in high technology today really going to change our lives that profoundly and usher in the same kind of life-altering changes past technology revolutions have? Many think they will. But without being a “technophobe,” Robert Gordon takes a different view, and his arguments are compelling.

His new book, The Rise and Fall of American Growth, takes a thoughtful look at previous revolutions and, without dismissing today’s tech revolution, shows just how truly life-changing those earlier revolutions were. Here is part of what noted economist Paul Krugman says in his review of Gordon’s 762-page book:

I was fascinated by Gordon’s account of the changes wrought by his Great Inventions. As he says, “Except in the rural South, daily life for every American changed beyond recognition between 1870 and 1940.” Electric lights replaced candles and whale oil, flush toilets replaced outhouses, cars and electric trains replaced horses. (In the 1880s, parts of New York’s financial district were seven feet deep in manure.)

Meanwhile, backbreaking toil both in the workplace and in the home was for the most part replaced by far less onerous employment. This is a point all too often missed by economists, who tend to think only about how much purchasing power people have, not about what they have to do to get it, and Gordon does an important service by reminding us that the conditions under which men and women labor are as important as the amount they get paid.

Aside from its being an interesting story, however, why is it important to study this transformation? Mainly, Gordon suggests — although these are my words, not his — to provide a baseline. What happened between 1870 and 1940, he argues, and I would agree, is what real transformation looks like. Any claims about current progress need to be compared with that baseline to see how they measure up.

And it’s hard not to agree with him that nothing that has happened since is remotely comparable. Urban life in America on the eve of World War II was already recognizably modern; you or I could walk into a 1940s apartment, with its indoor plumbing, gas range, electric lights, refrigerator and telephone, and we’d find it basically functional. We’d be annoyed at the lack of television and Internet — but not horrified or disgusted.

By contrast, urban Americans from 1940 walking into 1870-style accommodations — which they could still do in the rural South — were indeed horrified and disgusted. Life fundamentally improved between 1870 and 1940 in a way it hasn’t since.

Something to think about as we pin our hopes for the future on today’s emerging technology.

You can read Krugman’s full review here:

http://www.nytimes.com/2016/01/31/books/review/the-powers-that-were.html?_r=0

Does Technology Change the World?

Does technology change the world? And is it the “lone genius” who gives us these gifts? These are huge questions for all of us. Here is what Jon Gertner shares in “Unforeseeable Consequences:”

At various points in “How We Got to Now,” Steven Johnson helps us see how innovation is almost never the result of a lone genius experiencing a sudden voilà! moment; it’s a complex process involving a dizzying number of inputs, individuals, setbacks and (sometimes) accidents. Also, it’s hardly the exclusive domain of private-sector entrepreneurs. Important ideas are often driven by academics, governments and philanthropists.

Above all, though, technological histories like this help us reckon with how much we miss by focusing too exclusively on economic, cultural and political history. Not that any one domain is superior to another — only that Johnson proves you can’t explain one without the others. He does seem to suggest that technological history may have an advantage in one regard: It not only helps readers better see where we’ve been, but urges us to think harder about where we’re going.

Read the entire article here.

Drone Wars

One of the most innovative technologies used anywhere – and especially in our military – is unmanned or autonomous systems, sometimes called “drones.” These military drones have received a great deal of media attention, especially armed drones, which are often operated by U.S. intelligence agencies or our military to take out suspected terrorists.

Few security issues are more controversial. In an effort to shed some light on an area where there is mostly heat, I published an article with Faircount Media entitled “The Other Side of Autonomy.”

A few salient quotes from this article capture the controversy surrounding “drone wars.”

In an article entitled, “Morals and the Machine,” The Economist addressed the issue of autonomy and humans-in-the-loop this way:

As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously. As that happens, they will be presented with ethical dilemmas…More collaboration is required between engineers, ethicists, lawyers and policymakers, all of whom would draw up very different types of rules if they were left to their own devices.

Bill Keller put the issue of autonomy for unmanned systems this way in his Op-ed, “Smart Drones,” in the New York Times in March 2013:

If you find the use of remotely piloted warrior drones troubling, imagine that the decision to kill a suspected enemy is not made by an operator in a distant control room, but by the machine itself. Imagine that an aerial robot studies the landscape below, recognizes hostile activity, calculates that there is minimal risk of collateral damage, and then, with no human in the loop, pulls the trigger. Welcome to the future of warfare. While Americans are debating the president’s power to order assassination by drone, powerful momentum – scientific, military and commercial – is propelling us toward the day when we cede the same lethal authority to software.

Looking ahead to this year and beyond, it is clear that “drone warfare” will continue to be extremely controversial. Stay tuned!

Click here to read the entire article (PDF)