Who’s Really in Control?

Can machines – robots – be trusted to make moral choices? Do we want to find out – or stop this technology in its tracks?

Imagine it’s a Sunday in the not-too-distant future. An elderly woman named Sylvia is confined to bed and in pain after breaking two ribs in a fall. She is being tended by a helper robot; let’s call it Fabulon. Sylvia calls out to Fabulon asking for a dose of painkiller. What should Fabulon do?

The coders who built Fabulon have programmed it with a set of instructions: The robot must not hurt its human. The robot must do what its human asks it to do. The robot must not administer medication without first contacting its supervisor for permission. On most days, these rules work fine. On this Sunday, though, Fabulon cannot reach the supervisor because the wireless connection in Sylvia’s house is down. Sylvia’s voice is getting louder, and her requests for pain meds become more insistent. Fabulon’s rules now collide: obeying Sylvia means dispensing medication without permission, while waiting for permission means leaving her in pain.
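
The bind is easy to see if you write the rules down as a program. Here is a minimal sketch in Python; every name in it (Request, supervisor_approves, decide) is invented for illustration from the scenario above, not taken from any real system.

```python
# A minimal sketch of Fabulon's rule conflict. All names here
# (Request, supervisor_approves, decide) are illustrative, not from the article.

from dataclasses import dataclass

@dataclass
class Request:
    patient: str
    action: str        # e.g. "administer_painkiller"
    causes_harm: bool  # rule 1: the robot must not hurt its human


def supervisor_approves(request: Request) -> bool | None:
    """Ask the remote supervisor for permission.

    Returns True or False on success, or None when the connection
    is down, as on Sylvia's Sunday.
    """
    return None  # the wireless is down; no answer is possible


def decide(request: Request) -> str:
    # Rule 1: never hurt the human.
    if request.causes_harm:
        return "refuse"
    # Rule 3: medication requires the supervisor's permission.
    if request.action == "administer_painkiller":
        approval = supervisor_approves(request)
        if approval is None:
            # Rule 2 says obey the human; rule 3 says wait for permission.
            # The rules give no way to break the tie.
            return "deadlock"
        return "comply" if approval else "refuse"
    # Rule 2: otherwise, do what the human asks.
    return "comply"


print(decide(Request("Sylvia", "administer_painkiller", causes_harm=False)))
# -> "deadlock"
```

The program does not answer the question; it only reproduces the deadlock, which is the point of the thought experiment.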

Bedside care is one thing. But what about armed drones? The military has developed lethal autonomous weapons systems like the cruise missile, and it is working on a ground robot that would shoot or hold its fire based on its own assessment of the situation under the international rules of war. It would be programmed, for example, to home in on a permissible target — a person who can be identified as an enemy combatant because he is wearing a uniform, say — or to determine that shooting is not permissible, because the target is in a school or a hospital, or has already been wounded.
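
That description amounts to a checklist, and a checklist is simple to sketch. The fields below (in_uniform, in_protected_site, already_wounded) are assumptions drawn only from the examples in the paragraph above; no real weapons system is being modeled.

```python
# An illustrative sketch of the shoot/hold-fire checklist described above.
# The field names are assumptions for illustration; no real system is modeled.

from dataclasses import dataclass

@dataclass
class Target:
    in_uniform: bool         # identifiable as an enemy combatant
    in_protected_site: bool  # e.g. inside a school or hospital
    already_wounded: bool    # out of the fight under the rules of war


def may_engage(target: Target) -> bool:
    # Shooting is impermissible at protected sites or against the wounded.
    if target.in_protected_site or target.already_wounded:
        return False
    # Otherwise, permissible only for an identifiable combatant.
    return target.in_uniform


print(may_engage(Target(in_uniform=True, in_protected_site=False, already_wounded=False)))  # True
print(may_engage(Target(in_uniform=True, in_protected_site=True, already_wounded=False)))   # False
```

Even this toy version shows where the hard part hides: someone still has to decide what counts as a uniform, a hospital, or a wound, and those judgments are anything but boolean.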

There’s something peculiarly comforting in the idea that ethics can be calculated by an algorithm: It’s easier than the panicked, imperfect bargains humans sometimes have to make. But maybe we should be worried about outsourcing morality to robots as easily as we’ve outsourced so many other forms of human labor. Making hard questions easy should give us pause.

Read more here:

http://www.nytimes.com/2015/01/11/magazine/death-by-robot.html
