Nick Bostrom: Google Is Winning the Artificial Intelligence Arms Race

A Google Android mascot pictured in Zenica, Bosnia and Herzegovina, June 12, 2015. Google is leading the way in developing AI that potentially poses an existential threat to humanity. REUTERS/Dado Ruvic

Google is leading the way in the global race to create human-level artificial intelligence, according to leading AI expert Nick Bostrom.

Speaking at the IP Expo conference in London on Wednesday, October 5, Bostrom said that there are several companies and organizations that are currently focused on developing human-level AI, or artificial general intelligence.

"There are different bets on what approach [to developing human-level AI] is most promising, and since we don't know what approach will ultimately work, there is some uncertainty there," Bostrom said in response to a question from Newsweek .

"Baidu, Open AI, and all the large tech companies have various kinds of AI efforts that if they were to become specifically directed to this aim, they have a lot of resources."

When pressed to name the single company currently leading the field, Bostrom said that Google's DeepMind was the clear frontrunner.

"At this point in time I think that DeepMind is very strong…it is probably the largest group specifically trying to solve general intelligence," Bostrom said. "But if this happens three decades from now, there might be some entirely new thing that doesn't exist yet, just as three decades ago a lot of the current players wouldn't be on the table. A lot could change many times over in the remaining time."

Nick Bostrom gained worldwide attention following his seminal book on AI, Superintelligence. University of Oxford

Swedish philosopher Bostrom, who heads the Future of Humanity Institute at the University of Oxford, gained worldwide attention in 2014 with the release of his seminal work Superintelligence. Following its publication, Stephen Hawking, Bill Gates and Elon Musk were among those to raise concerns about the existential threat that artificial intelligence could pose to humanity.

According to Musk, advanced AI could be "more dangerous than nukes," while Hawking suggested that it could lead to the end of humanity. Both have since joined Bostrom in signing an open letter on artificial intelligence calling for research priorities that would mitigate such threats.

Since signing the letter, Musk has strongly hinted that Google is the "only one" he is worried about when it comes to the development of advanced artificial intelligence.

Google's 'Big Red Button'

Introducing a "super intelligent" system, Bostrom argues, would see humans replaced as the dominant life form on Earth—and potentially wiped out. Ultimately, the main concern is that the first machine to surpass human capabilities will be impossible to switch off. Speaking at a TED (technology, entertainment and design) conference last year, Bostrom hypothesized why neanderthals hadn't "flicked the off switch" with humans when we became the dominant species.

"They certainly had reasons," Bostrom said. "The reason is that we are an intelligent adversary. We can anticipate threats and plan around them. But so could a super intelligent agent and it would be much better at that than we are."

Fortunately, this issue is something that Google is already working on—in the form of a "big red button" that would act as an off switch for a rogue artificial intelligence agent. Having been acquired by Google in 2014 for $500 million, DeepMind has become the search giant's AI flag bearer, making headlines earlier this year for creating the first computer program to beat a human champion at the board game Go.

Google red button
Google's "big red button" would act as an off switch for a rogue artificial intelligence agent. WikiCommons

In June, researchers from DeepMind and Bostrom's Future of Humanity Institute put forward the idea of an off switch in a peer-reviewed paper titled Safely Interruptible Agents. The paper outlined a framework for preventing advanced machines from ignoring turn-off commands and escaping human control.

"Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences," the paper stated. "If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions."

While such measures could ultimately save humanity, Bostrom warned that the artificial intelligence race could be won by a company that does not take such precautions.

"There is a control problem," he said. "If you have a very tight tech race to get there first, whoever invests in safety could lose the race. This could exacerbate the risks from out of control AI."

This article has been updated to acknowledge that the Safely Interruptible Agents paper was co-authored by both DeepMind and the Future of Humanity Institute.


About the writer


Anthony Cuthbertson is a staff writer at Newsweek, based in London.  

