Paula Hawkins


One of the most popular writers today is Paula Hawkins, author of The Girl on the Train. And many of us are always interested in learning what great writers read. Some excerpts:

What books are currently on your night stand?

“As If,” by Blake Morrison; “The Underground Railroad,” by Colson Whitehead; Virginia Woolf’s “A Writer’s Diary.” I’m also listening to the audiobook of “A Brief History of Seven Killings,” by Marlon James.

What’s the last great book you read?

“A Little Life,” by Hanya Yanagihara. I came to it rather late — I’d been put off by what I’d heard about the upsetting subject matter, but when I heard Hanya speak about the book at the Sydney Writers’ Festival in May I changed my mind. And I’m so glad I did, because while it was every bit as traumatic as everyone said it would be, it is also a remarkable study of friendship, suffering and the difficulty of recovery. Incidentally it is the first audiobook I have ever listened to, and I’m now a total convert. I’d forgotten what a joyous thing it is to allow yourself to be told a story.

Want more? You can read the full piece here.

Artificial Intelligence


Few technologies have had as big an impact – or promise to have a bigger one in the future – than artificial intelligence, or AI.

That’s why it was no surprise that the New York Times Magazine featured an article entitled “Can A.I. Be Taught to Explain Itself?” For me, it was riveting. Some excerpts:

It has become commonplace to hear that machines, armed with machine learning, can outperform humans at decidedly human tasks, from playing Go to playing “Jeopardy!” We assume that is because computers simply have more data-crunching power than our soggy three-pound brains. Kosinski’s results suggested something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the “black box” problem — the inability to discern exactly what machines are doing when they’re teaching themselves novel skills — and it has become a central concern in artificial-intelligence research. In many arenas, A.I. methods have advanced with startling speed; deep neural networks can now detect certain kinds of cancer as accurately as a human. But human doctors still have to make the decisions — and they won’t trust an A.I. unless it can explain itself.

“Artificial intelligence” is a misnomer, an airy and evocative term that can be shaded with whatever notions we might have about what “intelligence” is in the first place. Researchers today prefer the term “machine learning,” which better describes what makes such algorithms powerful.

The idea was to connect leading A.I. researchers with experts in data visualization and human-computer interaction to see what new tools they might invent to find patterns in huge sets of data. There to judge the ideas, and act as hypothetical users, were analysts for the C.I.A., the N.S.A. and sundry other American intelligence agencies.

Even if a machine made perfect decisions, a human would still have to take responsibility for them — and if the machine’s rationale was beyond reckoning, that could never happen.
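
The “black box” problem the article describes is easy to demonstrate at small scale. Below is a minimal sketch – my own illustration, not anything from the article – that trains a small neural network on the classic breast-cancer dataset and then applies permutation importance, one simple post-hoc explanation technique among many, to ask the model which inputs it leans on.

```python
# A minimal sketch (illustrative, not from the article) of the "black box"
# problem: a small neural network classifies tumors accurately, but its
# weights explain nothing, so we bolt on a post-hoc technique
# (permutation importance) to see which inputs drive its decisions.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32,),
                                    max_iter=1000, random_state=0))
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")

# "Explain yourself": how much does accuracy drop when one feature is
# randomly shuffled? Bigger drops mark features the model relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:+.3f}")
```

Even this crude report is more of an explanation than the raw weights offer, which is roughly the gap the research described above is trying to close.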

Intrigued? You can read the full article here.

Time to Party…Or?


By almost any measure, the U.S. and the world economy are booming. We seem to have moved well beyond the 2008 recession and are firing on all cylinders.

That’s why I found Desmond Lachman’s New York Times article, “The Global Economy Is Partying Like It’s 2008,” so intriguing. He wonders if we’re in another bubble. He begins like this:

Certainly, the American economy is doing well, and emerging economies are picking up steam. But global asset prices are once again rising rapidly above their underlying value — in other words, they are in a bubble. Considering the virtual silence among economists about the danger they pose, one has to wonder what will happen in a year or two, when those bubbles eventually burst.

This silence is all the more surprising considering how much more pervasive bubbles are today than they were 10 years ago. While in 2008 bubbles were largely confined to the American housing and credit markets, they are now to be found in almost every corner of the world economy.


Want more? You can read the full piece here.

Best Books

08BOOKSOFTHEYEAR1-blog427-v3

One thing many of us look forward to on Sundays is reading the New York Times Book Review section. Here, outstanding writers offer their opinions on new books. They never seem to miss.

For much the same reason, we look forward to the Times year-end “best books” list. For 2017, the list is especially rich. Here is how the article begins:

This was a year when books — like the rest of us — tried to keep up with the news, and did a pretty good job of it. Novels about global interconnectedness, political violence and migration; deeply reported nonfiction accounts of racial and economic strife in the United States; stories both imagined and real about gender, desire and the role of beauty in the natural world. There were several worthy works of escapism, of course, but the literary world mostly reflected the gravity and tumult of the larger world. Below, The New York Times’s three daily book critics — Dwight Garner, Jennifer Senior and Parul Sehgal — share their thoughts about their favorites among the books they reviewed this year, each list alphabetical by author. Janet Maslin, a former staff critic who remains a frequent contributor to The Times, also lists her favorites.

Want more? You can read the full article here.

Tech Rising

What do technology and architecture have in common? Your first reaction might be, “not much,” but a closer look at what is happening to the San Francisco skyline might change your mind.

David Streitfeld’s recent piece, “San Francisco’s Skyline, Now Inescapably Transformed by Tech,” features the subtitle: “Salesforce Tower, which at 1,070 feet is the tallest office building west of the Mississippi, will be inhabited in January, signaling tech’s triumph in the city.”

This short piece in the Sunday New York Times Business section marks not just an association, but a marriage, between technology and architecture.

Streitfeld notes that in Silicon Valley, the office parks blend into the landscape. They might have made their workers exceedingly rich, they might have changed the world — whether for better or worse is currently up for debate — but there is nothing about them that says: We are a big deal.

Skyscrapers tell a different story. They are the pyramids of our civilization, permanent monuments of our existence. They show who is in charge and what they think about themselves. Salesforce Tower is breaking a San Francisco height record that stood for nearly half a century.

Intrigued? You can read the full article here.

Stories


Ever since our caveman ancestors drew pictures of their successful hunt (it had to be successful, or they wouldn’t have returned to talk about it), Homo sapiens have been enticed by stories.

And while we’ve known that for generations, today that notion is under attack from the idea of making “informed decisions” supported by DATA.

But now there is push-back, and that’s why I found David Leonhardt’s recent piece, “What I Was Wrong About This Year,” so refreshing. It puts an exclamation point on the need for STORY. Here is how he begins:

The Israeli intelligence service asked the great psychologist Daniel Kahneman for help in the 1970s, and Kahneman came back with a suggestion: Get rid of the classic intelligence report. It allows leaders to justify any conclusion they want, Kahneman said. In its place, he suggested giving the leaders estimated probabilities of events.

The intelligence service did so, and an early report concluded that one scenario would increase the chance of full-scale war with Syria by 10 percent. Seeing the number, a top official was relieved. “Ten percent increase?” he said. “That is a small difference.”

Kahneman was horrified (as Michael Lewis recounts in his book “The Undoing Project”). A 10 percent increase in the chance of catastrophic war was serious. Yet the official decided that 10 wasn’t so different from zero.

Looking back years later, Kahneman said: “No one ever made a decision because of a number. They need a story.”
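
As an aside, the official’s mistake is easy to make concrete with one line of expected-value arithmetic. Here is a back-of-envelope sketch with invented numbers – nothing below comes from Kahneman, Lewis, or the article:

```python
# Back-of-envelope expected-loss arithmetic. All numbers are invented
# for illustration; none come from Kahneman, Lewis, or the article.
cost_of_war = 1_000_000_000   # assumed scale of a catastrophic war, any unit
p_before = 0.10               # assumed baseline chance of full-scale war
p_after = p_before + 0.10     # the scenario adds 10 percentage points

added_expected_loss = (p_after - p_before) * cost_of_war
print(f"added expected loss: {added_expected_loss:,.0f}")
# -> 100,000,000 in whatever unit the cost is measured: hardly "small"
```

The arithmetic says the official was wrong. The story – which is Leonhardt’s point – is what would have convinced him.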

Want more? You can read the full article here.

Law of Innovation


Most businesses are “all about innovation.” We have made innovation a buzzword, but few have really done a deep dive into what innovation means, especially in business.

While such a broad term defies simple explanation – and can mean many things to many people – I found Christopher Mims’s “Laws of Innovation” piece in the Wall Street Journal helpful in bounding the challenge. Here are his “laws.”

Three decades ago, a historian wrote six laws to explain society’s unease with the power and pervasiveness of technology. Though based on historical examples taken from the Cold War, the laws read as a cheat sheet for explaining our era of Facebook, Google, the iPhone and FOMO.

You’ve probably never heard of these principles or their author, Melvin Kranzberg, a professor of the history of technology at Georgia Institute of Technology who died in 1995.

What’s a bigger shame is that most of the innovators today, who are building the services and tools that have upended society, don’t know them, either.

Fortunately, the laws have been passed down by a small group of technologists who say they have profoundly impacted their thinking. The text should serve as a foundation—something like a Hippocratic oath—for all people who build things.

  1. ‘Technology is neither good nor bad; nor is it neutral.’
  2. ‘Invention is the mother of necessity.’
  3. ‘Technology comes in packages, big and small.’
  4. ‘Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.’
  5. ‘All history is relevant, but the history of technology is the most relevant.’
  6. ‘Technology is a very human activity.’

Want more? You can read the full piece here.

Lost Wars


The terrorist attacks on September 11, 2001 profoundly changed America’s national security equation – perhaps forever.

Those attacks spawned the wars in Afghanistan and Iraq, which have all but consumed the U.S. military for more than a decade and a half.

There have been several books – some good and some less so – that have tried to help us come to grips not only with why we embarked upon these wars, but also with why we can’t “win.”

Andrew Bacevich’s review of Daniel Bolger’s book, “Why We Lost,” offers some key insights. Here is how he begins:

The author of this book has a lot to answer for. “I am a United States Army general,” Daniel Bolger writes, “and I lost the Global War on Terrorism.” The fault is not his alone, of course. Bolger’s peers offered plenty of help. As he sees it, in both Afghanistan and Iraq, abysmal generalship pretty much doomed American efforts.

The judgment that those wars qualify as lost — loss defined as failing to achieve stated objectives — is surely correct. On that score, Bolger’s honesty is refreshing, even if his explanation for that failure falls short. In measured doses, self-flagellation cleanses and clarifies. But heaping all the blame on America’s generals lets too many others off the hook.

Why exactly did American military leaders get so much so wrong? Bolger floats several answers to that question but settles on this one: With American forces designed for short, decisive campaigns, the challenges posed by protracted irregular warfare caught senior officers completely by surprise.

Since there aren’t enough soldiers — having “outsourced defense to the willing,” the American people stay on the sidelines — the generals asked for more time and more money. This meant sending the same troops back again and again, perhaps a bit better equipped than the last time. With stubbornness supplanting purpose, the military persisted, “in the vain hope that something might somehow improve.”

Want more? You can read the full article here.

AI and You!


Few subjects have captured the public’s imagination today more than artificial intelligence (AI) and machine learning. A niche tech subject just a few years ago, AI has now gone mainstream.

Part of this is because we are surrounded by digital apps like Siri and Cortana that inform and entertain us daily (just ask Siri, “What is zero divided by zero?”).

But AI will play a much more profound role in our lives in the future – though we may have to wait for it. Here is part of what Steve Lohr shared recently in a New York Times piece:

There are basically three big questions about artificial intelligence and its impact on the economy: What can it do? Where is it headed? And how fast will it spread?

Three new reports combine to suggest these answers: It can probably do less right now than you think. But it will eventually do more than you probably think, in more places than you probably think, and will probably evolve faster than powerful technologies have in the past.

This bundle of research is itself a sign of the A.I. boom. Researchers across disciplines are scrambling to understand the likely trajectory, reach and influence of the technology — already finding its way into things like self-driving cars and image recognition online — in all its dimensions. Doing so raises a host of challenges of definition and measurement, because the field is moving quickly — and because companies are branding things A.I. for marketing purposes.

An “AI Index,” created by researchers at Stanford University, the Massachusetts Institute of Technology and other organizations, released on Thursday, tracks developments in artificial intelligence by measuring aspects like technical progress, investment, research citations and university enrollments. The goal of the project is to collect, curate and continually update data to better inform scientists, businesspeople, policymakers and the public.
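
The article doesn’t spell out how such an index is computed, but the usual recipe for a composite index is simple: rescale each metric against a base year, then average the results. Here is a hypothetical sketch with made-up numbers – not the Stanford project’s actual data or methodology:

```python
# A hypothetical composite index: normalize each series to a base year,
# then average. Metric names and values are invented for illustration.
metrics = {
    "vc_investment_usd_m": {2015: 1200, 2016: 1900, 2017: 3100},
    "paper_citations":     {2015: 5000, 2016: 6400, 2017: 8800},
    "course_enrollments":  {2015:  800, 2016: 1300, 2017: 2100},
}
BASE_YEAR = 2015

def normalize(series, base_year):
    """Express each year's value as a multiple of the base year (= 1.0)."""
    return {year: value / series[base_year] for year, value in series.items()}

normalized = {name: normalize(series, BASE_YEAR)
              for name, series in metrics.items()}
for year in sorted({y for s in metrics.values() for y in s}):
    index = sum(normalized[name][year] for name in metrics) / len(metrics)
    print(f"{year}: index = {index:.2f}")
```

However the real index is built, the goal is the same: turn heterogeneous measurements into a single trend line the public can follow.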

Want more? You can read the full article here.

Thinking Well?


As human beings, we pride ourselves on being rational…after all…we’re not lemmings running off the edge of a cliff…right?

I thought we were, that is, until I read a short op-ed by David Brooks. Here is part of what he said about how rational we are:

Richard Thaler has just won an extremely well deserved Nobel Prize in economics. Thaler took an obvious point, that people don’t always behave rationally, and showed the ways we are systematically irrational.

Thanks to his work and others’, we know a lot more about the biases and anomalies that distort our perception and thinking, like the endowment effect (once you own something you value it more than before you owned it), mental accounting (you think about a dollar in your pocket differently than you think about a dollar in the bank) and all the rest.

It’s when we get to the social world that things really get gnarly. A lot of our thinking is for bonding, not truth-seeking, so most of us are quite willing to think or say anything that will help us be liked by our group. We’re quite willing to disparage anyone when, as Marilynne Robinson once put it, “the reward is the pleasure of sharing an attitude one knows is socially approved.” And when we don’t really know a subject well enough, in T. S. Eliot’s words, “we tend always to substitute emotions for thoughts,” and go with whatever idea makes us feel popular.

Want more? You can read the full article here.