For many Americans, the threat of a nuclear missile strike from North Korea feels very real at the moment. Much more real than being attacked by an intelligent robot, say.
But according to Elon Musk, the rise of artificial intelligence (AI) poses a much greater threat to humanity than Kim Jong-un's belligerent regime in Pyongyang.
The Tesla and SpaceX chief executive has long warned of the dangers of AI and issued his latest opinion after a bot from OpenAI defeated some of the world's best players in a professional gaming competition.
Musk shared a picture of a post with the strapline "In the end the machines will win," along with the comment: "If you're not concerned about AI safety, you should be. Vastly more risk than North Korea."
Musk has previously urged governors to legislate for safe uses of AI, stating that robots could replace humans in any kind of job and could be incentivized to harm humans. "AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not," Musk told the National Governors Association in July.
The South African-born entrepreneur also got into a spat with Mark Zuckerberg after the Facebook chief said that AI "naysayers" like Musk were "pretty irresponsible" for proposing "doomsday scenarios." Musk shot back that Zuckerberg's understanding of the subject was "limited."
OpenAI is a nonprofit research company, backed by Musk, which describes itself as "discovering and enacting the path to safe artificial general intelligence." The bot that prompted Musk's tweet was competing in a Seattle e-sports competition against human gamers playing Dota 2, an incredibly popular and complex game that pits two teams of "heroes" against each other with the objective of destroying the other team's base.
The bot developed the skills to beat a top Dota 2 player by playing the game itself for just two weeks, tech magazine The Verge reported.
Musk is not the only famous figure to warn of the dangers of AI. In January, he and Stephen Hawking were among the signatories of an open letter, published by the Future of Life Institute, outlining 23 guidelines for the development of AI in ways beneficial to humanity. Among the tenets were that artificial intelligence be developed in service of "widely shared ethical ideals…and all humanity rather than one state or organization," and that an "arms race in lethal autonomous weapons" be avoided.
Uncommon Knowledge
Newsweek is committed to challenging conventional wisdom and finding connections in the search for common ground.
About the writer
Conor is a staff writer for Newsweek covering Africa, with a focus on Nigeria, security and conflict.