Stephen Hawking: Sentient Machines 'Could End Human Race'

Image: Theoretical physicist Stephen Hawking pictured during his lecture on the creation of the Universe at the European Organization for Nuclear Research (CERN) in Meyrin, near Geneva, September 9, 2009. Valentin Flauraud/Reuters

Professor Stephen Hawking has become the most recent high-profile expert to speak out about the dangers of artificial intelligence (AI), telling the BBC that developing it fully "could spell the end of the human race".

While acknowledging that the forms of AI created so far have been useful - including the technology the motor neurone disease sufferer himself uses to help him speak - Hawking, theoretical physicist and author of A Brief History of Time, warned that future developments could be dangerous.

"It would take off on its own, and re-design itself at an ever increasing rate," he said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Hawking is not alone in his concerns. In October, billionaire technology entrepreneur Elon Musk declared during an interview at the Massachusetts Institute of Technology (MIT) that AI was the biggest threat to human survival.

"I think we should be very careful about artificial intelligence," he said. "If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful.

"I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish."

Musk has himself invested in the AI company DeepMind, but explained to American news channel CNBC in June that he did so "not from the standpoint of actually trying to make any investment return… I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there."

"There have been movies about this, you know, like Terminator. There are some scary outcomes. And we should try to make sure the outcomes are good, not bad," he added.

However, some experts are more positive about the possibilities of artificial intelligence.

David Levy is a chess master and AI expert who has won the Loebner Prize for the most human-like chatbot twice, first in 1997 and again in 2009.

In 2007 he published the book Love and Sex with Robots, which claims that sex between humans and robots will be commonplace by 2050. In a recent interview with Newsweek, Levy said: "I believe that loving sex robots will be a great boon to society. There are millions of people out there who, for one reason or another, cannot establish good relationships."

Another believer in AI is Professor Adrian Cheok of City University London. He and Levy are working together on a new 'chat agent' project called I-Friend, which they hope will produce software that can respond to natural human language and speech.

Cheok also believes that AI will allow people to share "digital intimacy" in the future - he is currently developing 'Kissenger', a device that transmits kisses between two paired units that mimic the movement of a human mouth.



