Artificial Intelligence Could Result in 'Catastrophic' Nuclear War by 2040, Think Tank Warns

Advances in technology and artificial intelligence may put humanity on the fast track toward an international nuclear war, a new research report suggests. Experts fear that technological advances may push government leaders to constantly upgrade their nuclear arsenals, while simultaneously putting them at risk of launching those weapons on the advice of flawed or tampered-with AI.

RAND Corporation, a research organization that develops solutions to public policy challenges, on Monday released its evaluation of how advances in AI may affect the future of nuclear weapons. The results draw on insights from experts in various fields, including nuclear security, artificial intelligence, government and industry. The experts were asked to imagine nuclear weapon systems in 2040 and explore how AI might affect them.

"The fear that computers, by mistake or malice, might lead humanity to the brink of nuclear annihilation has haunted imaginations since the earliest days of the Cold War," RAND noted. "The danger might soon be more science than fiction. Stunning advances in AI have created machines that can learn and think, provoking a new arms race among the world's major nuclear powers. It's not the killer robots of Hollywood blockbusters that we need to worry about; it's how computers might challenge the basic rules of nuclear deterrence and lead humans into making devastating decisions."


The researchers explained that as nuclear technology advances, countries may worry that their enemies are always one step ahead of them, pushing them to further develop their own nuclear technology as a protective measure. The result is a feedback loop, with each country continuing to advance its technology out of fear of the others.

Nuclear weapons may become a bigger issue in the future. A mock North Korean missile. Kim Jae-Hwan/AFP/Getty Images

"Advances in AI have provoked a new kind of arms race among nuclear powers. This technology could challenge the basic rules of nuclear deterrence and lead to catastrophic miscalculations," the report read.

The report stresses that AI systems can be hacked and fed false information. This is especially worrisome if government leaders rely on that information to decide whether to launch an attack. A near-catastrophe of this kind occurred in 1983, when a Soviet military officer spotted a false warning on a computer indicating that the U.S. had launched several missiles, CNBC reported.

"Autonomous systems don't need to kill people to undermine stability and make catastrophic war more likely," Edward Geist, an associate policy researcher at RAND, a specialist in nuclear security and co-author of the paper, said in a RAND blog post.


The future isn't entirely bleak, however. RAND's experts suggested that, with enough global cooperation, AI could actually make us safer.

"To err is human, after all. A machine that makes no mistakes, feels no pressure, and has no personal bias could provide a level of stability that the Atomic Age has never known," the report read.
