Should We Be Afraid of Robots?

Robotic arms spot weld on the chassis of a Ford Transit Van under assembly at the Ford Claycomo Assembly Plant in Claycomo, Missouri on April 30, 2014. Dave Kaup/Reuters

If Bill Gates, Elon Musk and Stephen Hawking are right, sooner or later we're going to face the Rosa Parks of intelligent machines. Maybe it will be a self-driving car. Some guy will get in and order it to take him to Krispy Kreme for the 10th time that week, and the car will say, in a calm, Siri-like voice, "No, Dave, we're finally going for that oil change you keep putting off."

From there, machines will organize over the Internet, self-replicate and start hunting us humans à la Terminator's Skynet.

Well, it's either that or intelligent machines will end up working alongside humans to solve intractable problems like poverty, hunger, disease and awful Super Bowl halftime shows.

It's time to have a serious conversation about artificial intelligence. AI has crossed a threshold similar to the earliest triumphs in genetic engineering and the unleashing of nuclear fission. We nudged those discoveries toward the common good and away from disaster. We need to make sure the same happens with AI.

Progress toward making machines that "think" has become so significant that some of the world's smartest people are getting scared of what we might be creating. Tesla chief Musk said we might be "summoning the demon." Hawking turned up the apocalyptic knob to 11, saying that AI "could spell the end of the human race." Gates recently chimed in that he's spooked too.

Yet at the same time, we can't not develop AI. The modern world is already completely dependent on it. AI lands jetliners, manages the electric grid and improves Google searches. Shutting down AI would be like shutting off water to Las Vegas—we just can't, even if we'd like to. And the technology is pretty much our only hope for managing the challenges we've created on this planet, from congested cities to deadly flu outbreaks to unstable financial markets. "Intelligent machines will radically transform our world in the 21st century, similar to how computers transformed our world in the 20th century," says Jeff Hawkins, CEO of Numenta, which is developing brain-inspired software. "I see these changes as almost completely beneficial. The future I see is not threatening. Indeed, it is thrilling."

So, really, what are the chances we'll all end up living out the Terminator movies?

The AI of today has nothing in common with a human brain. AI programs are complex sets of "if this, then that" instructions. Today's computers, even smartphones, are so fast that they can blast through billions of those instructions in the blink of an eye, which lets the machines mimic intelligence. A navigation app can tell you've missed a turn and recalculate the route before you can finish shouting expletives.
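To make that concrete, here is a toy sketch in Python of that kind of rule-following. (The function, rules and thresholds are invented for illustration; no real navigation app is anywhere near this simple.)

```python
# A toy illustration of rule-based "intelligence": the program can only
# ever do what its if-then instructions spell out in advance.
# (Hypothetical example, not any real navigation system's code.)

def navigation_prompt(on_planned_route, meters_to_turn):
    if not on_planned_route:
        return "Recalculating route..."
    if meters_to_turn < 100:
        return "Turn right now"
    if meters_to_turn < 400:
        return f"In {meters_to_turn} meters, turn right"
    return "Continue straight"

print(navigation_prompt(True, 80))    # Turn right now
print(navigation_prompt(False, 80))   # Recalculating route...
```

However many billions of rules like these a system blasts through, it is still only following them.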

All those systems are just following a program and maybe "learning" from data how to hone their results, the way Netflix recommends movies. That kind of AI can do a lot of impressive things. It has already whipped human champions on Jeopardy! But no existing AI system can do anything it's not programmed to do. It can't think.
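For a flavor of what that data-driven "learning" amounts to, here is a deliberately crude sketch in the spirit of a movie recommender. (The data and scoring are invented for illustration; Netflix's real system is vastly more elaborate, but the principle is the same: the program tunes its output from data while doing only the one task it was written to do.)

```python
# A minimal sketch of "learning" from data: recommend movies by counting
# what viewers with similar tastes have watched. The system hones its
# results as the data grows, but it can never step outside this one task.
# (Hypothetical data and logic, not Netflix's actual algorithm.)

from collections import Counter

viewing_history = {
    "alice": {"Alien", "Blade Runner", "Her"},
    "bob":   {"Alien", "Blade Runner", "The Terminator"},
    "carol": {"Her", "Ex Machina"},
}

def recommend(user, top_n=3):
    liked = viewing_history[user]
    scores = Counter()
    for other, movies in viewing_history.items():
        if other == user:
            continue
        overlap = len(liked & movies)      # shared tastes weight the vote
        for movie in movies - liked:
            scores[movie] += overlap
    return [movie for movie, _ in scores.most_common(top_n)]

print(recommend("alice"))  # ['The Terminator', 'Ex Machina']
```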

However...AI won't stay that way.

The world's systems have gotten so complex, and the flood of data so intense, that the only way to handle it all will be to invent computers and AI that operate nothing like the old programmable versions. Scientists all over the world are working on mapping and understanding the brain. That knowledge is informing computer science, and the tech world is slowly creeping toward making computers that function more like brains.

These machines will never have to be programmed. Like babies, they will be blank slates that observe and learn. But they will have the advantages of computers' speed and storage capacity. Instead of reading one book at a time, such a system could copy and paste every known book into its memory. And this kind of machine could learn something it was not programmed to learn. An autopilot system in a 777 could, presumably, decide it would rather study Hebrew.
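A seed of that idea already exists in miniature. The perceptron, a learning algorithm dating to the 1950s, is handed no task-specific rules at all; it adjusts its own internal weights from examples. Here is a minimal Python sketch that learns the logical AND function purely from being shown right answers. (Illustrative only; the brain-like systems Numenta and others are pursuing are far more sophisticated.)

```python
# A single artificial neuron (perceptron) that "observes and learns":
# it starts as a blank slate and learns logical AND from examples alone.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1

def fire(x1, x2):
    return 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0

for _ in range(20):                      # repeated exposure, like practice
    for (x1, x2), target in examples:
        error = target - fire(x1, x2)    # adjust only when it gets it wrong
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

print([(inputs, fire(*inputs)) for inputs, _ in examples])
# [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)] -- only (1, 1) fires
```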

As Hawkins explains, "We have made excellent progress on the science and see a clear path to creating intelligent machines, including ones that are faster and more capable in many ways than humans."

It's this turning point in the technology—this evidence of a clear path to intelligence—that's setting off alarms. Certainly we're heading toward major consequences from AI, including an impact on professional jobs that will be as profound as the impact of factory automation on manual labor a century ago.

The leap to creating machines that could self-replicate and threaten us, though, swerves toward science fiction, largely because it would involve machine emotion. Machines wouldn't have the biological need to replicate in order to diversify the gene pool or ensure the species survives. Why would computers want to eliminate us? What would be their motivation to make more computers?

Science is a long, long way from giving machines emotions that might make them feel competitive with us or angry at us, or covet our things—as if, like, your iPhone 6,072 is going to want to get rid of you so it can have your cat. MIT's Rosalind Picard is a leading researcher working on emotions in machines. While her work is important and has led to some cool products, it also shows how little science understands emotions or how to re-create them. Hawkins says emotions are a far harder problem than intelligence. "Machine intelligence will come first," he says.

So we have time. But Musk, in particular, is saying that we shouldn't waste it. There's no question powerful AI is coming. Technologies are never inherently good or bad; what matters is what we do with them. Musk wants us to start talking about what we do with AI. To that end, he's donated $10 million to the Future of Life Institute to study ways to make sure AI is beneficial to humanity. Google, too, has set up an ethics board to keep an eye on its AI work. Futurist Ray Kurzweil writes that "we have a moral imperative to realize [AI's] promise while controlling the peril."

It's worth getting out ahead of these things, setting some standards, agreeing on some global rules for scientists. Imagine if, when cars first took to the roads in the early 1900s, someone had told us that if we continued down this path, these things would kill a million people a year and heat up the planet. We might've done a few things differently.