Who Brings Us AI?

When someone mentions artificial intelligence – AI – we typically think of some Silicon Valley tech titan dressed in jeans and an ever-so-chic sport coat.

But as they say, that’s just the tip of the iceberg. Few of us understand how the enormous troves of data needed to build AI get assembled and crunched.

Cade Metz helps us understand the unseen underbelly of the tech industry. It’s a revealing – and troubling – look at the cost of doing business to get that next cool app. Here’s how he begins:

BHUBANESWAR, India — Namita Pradhan sat at a desk in downtown Bhubaneswar, India, about 40 miles from the Bay of Bengal, staring at a video recorded in a hospital on the other side of the world.

The video showed the inside of someone’s colon. Ms. Pradhan was looking for polyps, small growths in the large intestine that could lead to cancer. When she found one — they look a bit like a slimy, angry pimple — she marked it with her computer mouse and keyboard, drawing a digital circle around the tiny bulge.

She was not trained as a doctor, but she was helping to teach an artificial intelligence system that could eventually do the work of a doctor.

Ms. Pradhan was one of dozens of young Indian women and men lined up at desks on the fourth floor of a small office building. They were trained to annotate all kinds of digital images, pinpointing everything from stop signs and pedestrians in street scenes to factories and oil tankers in satellite photos.

A.I., most people in the tech industry would tell you, is the future of their industry, and it is improving fast thanks to something called machine learning. But tech executives rarely discuss the labor-intensive process that goes into its creation. A.I. is learning from humans. Lots and lots of humans.
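Since the piece describes labeling as drawing a region around a finding and attaching a name to it, here is a minimal, hypothetical Python sketch of what one such annotation record might look like once a labeler circles a polyp. Every field name and value below is an illustrative assumption, not the actual format used by Ms. Pradhan’s firm or any hospital.

    # Hypothetical example of a single image-annotation record (assumed schema).
    annotation = {
        "image_id": "colonoscopy_frame_04211",   # assumed frame identifier
        "label": "polyp",                         # class assigned by the labeler
        "region": {                               # the circled area, as a simple circle
            "center_x": 312,                      # assumed pixel coordinates
            "center_y": 187,
            "radius": 24,
        },
        "annotator_id": "labeler_017",            # who drew the region
    }

    # Thousands of records like this, gathered from many labelers, become the
    # training examples a supervised model learns from.
    print(annotation["label"], annotation["region"])

The point of the sketch is simply that the “learning” in machine learning begins as structured human judgments like this one, produced at enormous scale.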

Want more? You can read the full article here

Charisma

How many times have you heard someone say, “He (or she) has charisma”? Certain people seem to have it, while most of us think we don’t.

Not to put too fine a point on it, but all of us need at least a little bit of charisma. It’s how we influence people and get along in the world.

That’s why I found this article, “Becoming Charismatic, One Step at a Time,” so fascinating. Here’s how it begins:

Ask people to name someone they find charming and the answers are often predictable. There’s James Bond, the fictional spy with a penchant for shaken martinis. Maybe they’ll mention Oprah Winfrey, Bill Clinton or a historical figure, like the Rev. Dr. Martin Luther King Jr. or Mahatma Gandhi. Now ask the same people to describe, in just a few seconds, what makes these charmers so likable.

It’s here, in defining what exactly charisma is, that most hit a wall. Instinctually, we know that we’re drawn to certain people more than others. Quantifying why we like them is an entirely different exercise.

The ancient Greeks described charisma as a “gift of grace,” an apt descriptor if you believe likability is a God-given trait that comes naturally to some but not others. The truth is that charisma is a learned behavior, a skill to be developed in much the same way that we learned to walk or practice vocabulary when studying a new language. Other desirable traits, like wealth or appearance, are undoubtedly linked to likability, but being born without either doesn’t preclude you from being charismatic.

For all the work put into quantifying charisma — and it’s been studied by experts through the ages, including Plato and those we talked to for this piece — there are still a lot of unknowns. There are, however, two undisputed truths.

The first is that we are almost supernaturally drawn to some people, particularly those we like. Though this is not always the case; we can just as easily be drawn in by a charismatic villain.

The second truth is that we are terrible at putting a finger on what it is that makes these people so captivating. Beyond surface-level observations — a nice smile, or the ability to tell a good story — few of us can quantify, in an instant, what makes charismatic people so magnetic.

Want more? You can read the full article here

Can You Do It All?

Do you want to do it all? You’re not alone. Most of us have lofty goals – not to mention New Year’s resolutions – regarding what we want to accomplish.

That’s tough to do in series – so we do them in parallel. In other words, we multitask. So how is that working for you? It isn’t working for me.

That’s why I was drawn to Daniel Willingham’s op-ed, “The High Price of Multitasking.” He nailed WHY it doesn’t work. Here’s how he began:

Not only do smartphones provide unprecedented access to information, they provide unprecedented opportunities to multitask. Any activity can be accompanied by music, selfies or social media updates. Of course, some people pick poor times to tweet or text, and lawmakers have stepped in. Forty-eight states have banned texting while driving. In Honolulu, it’s illegal to text or even look at your phone while crossing the street, and in the Netherlands they’ve banned texting while biking.

But legislation won’t proscribe all situations in which multitasking is unwise; you need to self-regulate. Understanding how the brain multitasks and why we find multitasking so appealing will help you gauge the hazard of pulling out your phone.

Multitasking feels like doing two things simultaneously, so it seems the danger lies in asking one mental process to do two incompatible things — for texting drivers, watching the screen and the road. A lot of lawmakers must think that way, because 20 states have instituted bans on driving using a hand-held phone while still allowing hands-free calls. Yet hands-free or hand-held makes no difference — they impair driving equivalently as far as external hazards go. Why?

You actually manipulate your phone only briefly for voice calls. The real problem is the toggling of attention between the conversation and the road. Even simple tasks can’t be done simultaneously; you switch between them, and that affects performance.

But people don’t multitask solely because they see no harm in it; they perceive benefits. They say they multitask for efficiency, to fight boredom or to keep up with social media.

Music, likely the most common variety of multitasking, is added to tasks because it heightens arousal (for example, your heart rate increases), making it easier to stick with a long drive or a tedious textbook. Music was once common on factory assembly lines; the British Broadcasting Corporation offered a radio program for this purpose, “Music While You Work,” from 1940 until 1967.

Thus, even if you fully appreciate the cognitive cost, you might tolerate it in exchange for the emotional lift. Parents disapprove when their child studies with deadmau5 blasting because they compare that with studying in silence. But the child calculates that without the music, he wouldn’t study.

Want more? You can read the full article here

Decide!

Whether it’s procrastination – or something deeper – many of us struggle to make decisions.

Our ancestors didn’t face this problem – just surviving was challenge enough.

Today, with our embarrassment of riches, we have SO many choices.

I don’t know if you have trouble deciding, but I do.

That’s why I found Susan Shain’s recent piece, “Making a Decision Doesn’t Have to Be So Hard,” so refreshing, and helpful. Here’s how she begins:

Should you order tacos or tikka masala? Stay at the hotel with the free breakfast or the one with all the succulents? Melt into the couch or drag yourself to happy hour?

If you’re like me, even the simplest decisions can make your pulse race. And when it comes to big, life-altering choices, the need to get it right (because life is short!), combined with ever-looming F.O.B.O. (fear of better options), can cause a state of near paralysis.

While this abundance of choice is a result of incredible privilege — not everyone has the freedom to select where they work or live, or how to spend their time or money — it can still be overwhelming. As Barry Schwartz, the author of “The Paradox of Choice,” said, “I’m reasonably confident we’re operating with far, far more options in most parts of our life than we need and that serve us.”

Here are five strategies for spending less time agonizing over decisions and more time appreciating the results….

Want more? You can read them here

 

Our Phones – Ourselves

Are you reading this on your phone? It’s likely that you are, and that your smartphone is such a constant companion that it’s on your person 24/7.

Was this the plan when Steve Jobs first introduced this magical device? Not at all, suggests Cal Newport. Here is how he began his insightful piece:

Smartphones are our constant companions. For many of us, their glowing screens are a ubiquitous presence, drawing us in with endless diversions, like the warm ping of social approval delivered in the forms of likes and retweets, and the algorithmically amplified outrage of the latest “breaking” news or controversy. They’re in our hands, as soon as we wake, and command our attention until the final moments before we fall asleep.

Steve Jobs would not approve.

In 2007, Mr. Jobs took the stage at the Moscone Convention Center in San Francisco and introduced the world to the iPhone. If you watch the full speech, you’ll be surprised by how he imagined our relationship with this iconic invention, because this vision is so different from the way most of us use these devices now.

In the remarks, after discussing the phone’s interface and hardware, he spends an extended amount of time demonstrating how the device leverages the touch screen before detailing the many ways Apple engineers improved the age-old process of making phone calls. “It’s the best iPod we’ve ever made,” Mr. Jobs exclaims at one point. “The killer app is making calls,” he later adds. Both lines spark thunderous applause. He doesn’t dedicate any significant time to discussing the phone’s internet connectivity features until more than 30 minutes into the address.

The presentation confirms that Mr. Jobs envisioned a simpler and more constrained iPhone experience than the one we actually have over a decade later. For example, he doesn’t focus much on apps. When the iPhone was first introduced there was no App Store, and this was by design. As Andy Grignon, an original member of the iPhone team, told me when I was researching this topic, Mr. Jobs didn’t trust third-party developers to offer the same level of aesthetically pleasing and stable experiences that Apple programmers could produce. He was convinced that the phone’s carefully designed native features were enough. It was “an iPod that made phone calls,” Mr. Grignon said to me.

Mr. Jobs seemed to understand the iPhone as something that would help us with a small number of activities — listening to music, placing calls, generating directions. He didn’t seek to radically change the rhythm of users’ daily lives. He simply wanted to take experiences we already found important and make them better.

The minimalist vision for the iPhone he offered in 2007 is unrecognizable today — and that’s a shame.

Under what I call the “constant companion model,” we now see our smartphones as always-on portals to information. Instead of improving activities that we found important before this technology existed, this model changes what we pay attention to in the first place — often in ways designed to benefit the stock price of attention-economy conglomerates, not our satisfaction and well-being.

Want more? You can read the full article here

Three Cheers for Generalists

I follow sports intently, and have read a great deal about the radically different paths followed by Tiger Woods and Roger Federer.

But when I read a recent article by David Epstein entitled “You Don’t Want a Child Prodigy,” a great deal crystallized for me. Here’s how he began:

One Thursday in January, I hit “send” on the last round of edits for a new book about how society undervalues generalists — people who cultivate broad interests, zigzag in their careers and delay picking an area of expertise. Later that night, my wife started having intermittent contractions. By Sunday, I was wheeling my son’s bassinet down a hospital hallway toward a volunteer harpist, fantasizing about a music career launched in the maternity ward.

A friend had been teasing me for months about whether, as a parent, I would be able to listen to my own advice, or whether I would be a “do as I write, not as I do” dad, telling everyone else to slow down while I hustle to mold a baby genius. That’s right, I told him, sharing all of this research is part of my plan to sabotage the competition while secretly raising the Tiger Woods of blockchain (or perhaps the harp).

I do find the Tiger Woods story incredibly compelling; there is a reason it may be the most famous tale of development ever. Even if you don’t know the details, you’ve probably absorbed the gist.

Woods was 7 months old when his father gave him a putter, which he dragged around in his circular baby-walker. At 2, he showed off his drive on national television. By 21, he was the best golfer in the world. There were, to be sure, personal and professional bumps along the way, but in April he became the second-oldest player ever to win the Masters. Woods’s tale spawned an early-specialization industry.

And yet, I knew that his path was not the only way to the top.

Consider Roger Federer. Just a year before Woods won this most recent Masters, Federer, at 36, became the oldest tennis player ever to be ranked No. 1 in the world. But as a child, Federer was not solely focused on tennis. He dabbled in skiing, wrestling, swimming, skateboarding and squash. He played basketball, handball, tennis, table tennis and soccer (and badminton over his neighbor’s fence). Federer later credited the variety of sports with developing his athleticism and coordination.

While Tiger’s story is much better known, when sports scientists study top athletes, they find that the Roger pattern is the standard. Athletes who go on to become elite usually have a “sampling period.” They try a variety of sports, gain a breadth of general skills, learn about their own abilities and proclivities, and delay specializing until later than their peers who plateau at lower levels. The way to develop the best 20-year-old athlete, it turns out, is not the same as the way to make the best 10-year-old athlete.

MAYBE LATE BLOOMERS DO BEST!

Want more? You can read the full article here

Sports and Life

Much has been written about how sports are a metaphor for life. While not everyone will agree with that statement, it is, from my perspective, largely true.

It’s no surprise, then, that in the midst of the N.B.A. finals, an article on the cover of Sunday’s New York Times Business Section profiled John McLean.

McLean is featured in this prominent article as a premier wealth manager to the N.B.A. elite. A bevy of N.B.A. superstars have hired him to manage their money – and their lives.

What I found most compelling in this article is the illustration that accompanies it. I don’t know about you, but I find it a helpful reminder for everyday life.

Want more? You can read it here

Founding Fathers

Few Americans – or others, for that matter – would dispute that the American Revolution was one of the most iconic events of the last millennium. But we tend to have mixed feelings about the men (all men) at the center of the revolution.

That’s why I found Rick Atkinson’s recent piece, “Why We Still Care About America’s Founders,” so compelling. Here is how he begins:

There’s a lot to dislike about the founding fathers and the war they and others fought for American independence.

The stirring assertion that “all men are created equal” did not, of course, apply to 500,000 black slaves — one in five of all souls occupying the 13 colonies when those words were written in 1776. Nor was it valid for Native Americans, women or indigents.

And yet, the creation story of America’s founding remains valid, vivid and exhilarating. At a time when national unity is elusive, when our partisan rancor seems ever more toxic, when the simple concept of truth is disputed, that story informs who we are, where we came from, what our forebears believed and — perhaps the profoundest question any people can ask themselves — what they were willing to die for.

What can we learn from that ancient quarrel? First, that this nation was born bickering; disputation is in the national genome. Second, that there are foundational truths that not only are indeed true, but also, as the Declaration of Independence insists, “self-evident.” Third, that leaders worthy of our enduring admiration rise to the occasion with acumen, grit, wisdom and grace. And fourth, that whatever trials befall us today, we have overcome greater perils.

There is a great deal more in his piece, which you can read at the link below, but if you are interested in the American Revolution, here are two books I highly recommend:

  • “1776” by David McCullough
  • “Six Frigates” by Ian Toll

Want more? You can read it here

Too Busy?

Full disclosure…I am busy. Really busy! Or am I? I always pine for some relaxation, but feeling the need to do one more thing usually overwhelms me.

That’s why I was drawn to a recent review of the book “How to Do Nothing: Resisting the Attention Economy.” Here’s how it begins:

In 2015, Jenny Odell started an organization she called The Bureau of Suspended Objects. Odell was then an artist-in-residence at a waste operating station in San Francisco. As the sole employee of her bureau, she photographed things that had been thrown out and learned about their histories. (A bird-watcher, Odell is friendly with a pair of crows that sit outside her apartment window; given her talent for scavenging, you wonder whether they’ve shared tips.)

Odell’s first book, “How to Do Nothing: Resisting the Attention Economy,” echoes the approach she took with her bureau, creating a collage (or maybe it’s a compost heap) of ideas about detaching from life online, built out of scraps collected from artists, writers, critics and philosophers. In the book’s first chapter, she remarks that she finds things that already exist “infinitely more interesting than anything I could possibly make.” Then, summoning the ideas of others, she goes on to construct a complex, smart and ambitious book that at first reads like a self-help manual, then blossoms into a wide-ranging political manifesto.

Though trained as an artist, Odell has gradually become known for her writing. Her consistent theme is the invasion of the wider world by internet grotesqueries grown in the toxic slime of Amazon, Instagram and other social media platforms. She has a knack for evoking the malaise that comes from feeling surrounded by online things. Like many of us, she would like to get away from that feeling.

Odell suggests that she has done this, semi-successfully, by striking a stance of public refusal and by retraining her attention to focus on her surroundings. She argues that because the internet strips us of our sense of place and time, we can counter its force by resituating ourselves within our physical environment, by becoming closer to the natural world.

Want more? You can read the full article here

AI and Humans

We have posted a great deal about artificial intelligence and its impact on us. That’s why I was drawn to a recent article, “Without Humans, A.I. Can Wreak Havoc.” Here’s how it begins:

The year 1989 is often remembered for events that challenged the Cold War world order, from the protests in Tiananmen Square to the fall of the Berlin Wall. It is less well remembered for what is considered the birth of the World Wide Web. In March of 1989, the British researcher Tim Berners-Lee shared the protocols, including HTML, URL and HTTP that enabled the internet to become a place of communication and collaboration across the globe.

As the World Wide Web marks its 30th birthday on Tuesday, public discourse is dominated by alarm about Big Tech, data privacy and viral disinformation. Tech executives have been called to testify before Congress, a popular campaign dissuaded Amazon from opening a second headquarters in New York and the United Kingdom is going after social media companies that it calls “digital gangsters.” Implicit in this tech-lash is nostalgia for a more innocent online era.

But longing for a return to the internet’s yesteryears isn’t constructive. In the early days, access to the web was expensive and exclusive, and it was not reflective or inclusive of society as a whole. What is worth revisiting is less how it felt or operated, but what the early web stood for. Those first principles of creativity, connection and collaboration are worth reconsidering today as we reflect on the past and the future promise of our digitized society.

The early days of the internet were febrile with dreams about how it might transform our world, connecting the planet and democratizing access to knowledge and power. It has certainly effected great change, if not always what its founders anticipated. If a new democratic global commons didn’t quite emerge, a new demos certainly did: An internet of people who created it, shared it and reciprocated in its use.

This is just a snippet. Want more? You can read the full article here.