'Existential Catastrophe' May Loom With No Proof AI Is Controllable, Expert Warns

Artificial intelligence (AI) has the potential to cause an "existential catastrophe" for humanity, a researcher has warned.

Roman Yampolskiy, an associate professor of computer science and engineering at the University of Louisville's Speed School of Engineering, has conducted an extensive review of the relevant scientific literature and says he has found no proof that AI can be controlled. Even if partial controls are introduced, he argues, they will likely be insufficient.

As a result, the researcher believes AI should not be developed without such proof. Although AI may be one of the most important problems facing humanity, the technology remains poorly understood, poorly defined and poorly researched, according to Yampolskiy, who is an AI safety expert.

The researcher's upcoming book, AI: Unexplainable, Unpredictable, Uncontrollable, explores the ways that AI has the potential to dramatically reshape society—perhaps not always to our advantage.

"We are facing an almost guaranteed event with potential to cause an existential catastrophe. No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance," Yampolskiy said in a press release.

He continued: "Why do so many researchers assume that [the] AI control problem is solvable? To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable. This, combined with statistics that show the development of AI superintelligence is an almost guaranteed event, [shows] we should be supporting a significant AI safety effort."

Yampolskiy argues that our ability to produce intelligent software far outstrips our ability to control or even verify it. In light of his extensive review of the literature, the researcher said advanced AI systems can never be fully controlled and will always present some level of risk, whatever benefits they might provide.

For Yampolskiy, an AI system becomes "uncontrollable" when it can make decisions or take actions that are not aligned with human intentions, cannot be overridden or shut down by humans, and can potentially cause harm.

"This could manifest in various ways, such as an AI optimizing for a goal in a manner that inadvertently causes negative consequences, or an AI developing capabilities that enable it to resist human intervention," Yampolskiy told Newsweek.

"For instance, an AI designed to optimize energy efficiency might do so at the cost of essential services, prioritizing its goal over human needs. Another example is an AI with self-improvement capabilities that evolves beyond its initial safety constraints, making it impossible for humans to predict or control its actions."

As AI systems become more powerful, their autonomy increases while our control over them decreases, creating potential safety risks.

"Less intelligent agents (people) can't permanently control more intelligent agents (artificial superintelligences). This is not because we may fail to find a safe design for superintelligence in the vast space of all possible designs, it is because no such design is possible, it doesn't exist. Superintelligence is not rebelling, it is uncontrollable to begin with," Yampolskiy said in the press release.

He added: "Humanity is facing a choice: do we become like babies, taken care of but not in control, or do we reject having a helpful guardian but remain in charge and free?"

The major risks of AI technology include the potential for creating systems that could act in ways not anticipated or desired by their creators, leading to unintended harm, according to the researcher.

"This encompasses everything from privacy violations and job displacement to more severe existential risks if superintelligent AI is developed without adequate safeguards," Yampolskiy said. "On the flip side, the primary benefits of AI are vast, including advancements in medicine, efficiency in resource management, solving complex global challenges, free labor, and enhancing our understanding of the universe. AI has the potential to significantly improve the quality of life for all of humanity."

"While the benefits of AI are immense and transformative, the risks, particularly those associated with superintelligent AI, pose significant concerns. It is not a matter of the risks outweighing the benefits—they do—but rather ensuring that we can harness the benefits without falling prey to the risks."

This requires rigorous safety research, ethical considerations from the outset of AI development, and international cooperation to establish guidelines and frameworks for responsible AI development, according to Yampolskiy.

"My view is that with careful planning, transparency, and global collaboration, we can steer AI development in a direction that maximizes benefits while minimizing risks," he said.

According to Yampolskiy, one way to mitigate the risks would be to sacrifice some of AI's capabilities in return for some control. He also suggests that AI systems should be modifiable, with "undo" options, and should be transparent and easy to understand in human language.

In addition, the researcher said limited moratoriums or even partial bans on some AI technology should be considered, and called for increased efforts and funding for AI safety research.

"We may not ever get to 100 percent safe AI, but we can make AI safer in proportion to our efforts, which is a lot better than doing nothing. We need to use this opportunity wisely," he said in the press release.

"This book serves as a call to the global community to prioritize safety and ethics in AI development to mitigate existential risks," he said.

Do you have a tip on a science story that Newsweek should be covering? Do you have a question about artificial intelligence? Let us know via science@newsweek.com.

Update 2/14/24, 6:06 a.m. ET: This article has been updated with additional comments from Roman Yampolskiy.


About the writer


Aristos is a Newsweek science reporter with the London, U.K., bureau. He reports on science and health topics.

