Scientists Hope Artificial Stupidity Could Save Humans From an AI Takeover

Some scientists are suggesting limiting how smart artificial intelligence can get—all the way down to human intelligence.

Scientists at Sorbonne University in Paris and the University of Louisville are suggesting that stopping AI from getting too smart might be the only way to save humanity. On August 11, the team posted its paper to arXiv, Cornell University Library's open-access repository for research.

"We say that an AI is made Artificially Stupid on a task when some limitations are deliberately introduced to match a human's ability to do the task," Michaël Trazzi and Roman Yampolskiy wrote in their paper. "An Artificial General Intelligence (AGI) can be made safer by limiting its computing power and memory, or by introducing Artificial Stupidity on certain tasks."

Artificial intelligence that can do anything a human brain can do, called artificial general intelligence, hasn't been created yet. But the scientists hope that limiting AI's computational abilities, and designing it to behave more like humans, is the best way to stop an AI takeover before it even happens.

Chatbots—the programs that hold simple conversations online or walk you through customer service on a website—are often already built with artificial stupidity. According to the paper, programmers deliberately design chatbots to make mistakes so they seem more humanlike.
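What might that look like in practice? Here is a minimal Python sketch of the idea; the paper doesn't prescribe an implementation, and the typo rate and typing speed below are illustrative guesses, not figures from the research. It slows replies to a human typing pace and occasionally swaps letters:

    import random
    import time

    def humanize_reply(reply, typo_rate=0.05, words_per_minute=200):
        # Deliberately degrade a chatbot reply so it reads as more
        # humanlike. Both parameters are illustrative assumptions.
        chars = list(reply)
        for i in range(len(chars) - 1):
            if chars[i].isalpha() and random.random() < typo_rate:
                # Swap adjacent letters, a common human typing slip.
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
        # Pause roughly as long as a human would need to type it.
        time.sleep(len(reply.split()) / words_per_minute * 60)
        return "".join(chars)

    print(humanize_reply("Sure, I can help you reset your password."))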

"To obtain an AI that does not exceed by far humans' abilities, for instance in arithmetics, the computing power allowed for mathematical capabilities must be artificially diminished," they wrote. "Besides, humans exhibit cognitive biases, which result in systematic errors in judgment and decision making. In order to build a safe AGI, some of those biases may need to be replicated."

These biases include courtesy bias, so the AI avoids offending others; conservatism, so the AI's values stay stable rather than drifting toward harmful ones; and the spotlight effect, so the AI overestimates how many people are watching it and behaves the same way even under low supervision.
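One of those biases, the spotlight effect, is easy to caricature in code. In this sketch, the audience multiplier and the utility scores are invented for illustration; the point is only that an inflated sense of being watched keeps the agent choosing the socially approved action even when no one is observing:

    def perceived_audience(actual_observers):
        # Spotlight effect: always overestimate how many people are
        # watching (the multiplier and floor are illustrative).
        return max(10 * actual_observers, 10)

    def choose(actions, actual_observers):
        audience = perceived_audience(actual_observers)
        # Utility mixes raw benefit with social approval weighted by
        # the inflated audience, so misbehavior that only pays off
        # when unobserved never wins.
        return max(actions, key=lambda a: a["benefit"] + a["approval"] * audience)

    actions = [
        {"name": "comply", "benefit": 1.0, "approval": 1.0},
        {"name": "cut corners", "benefit": 5.0, "approval": -1.0},
    ]
    print(choose(actions, actual_observers=0)["name"])  # prints "comply"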

[Photo: A worker puts finishing touches to an iPal social robot at an assembly plant in Suzhou, China. ALY SONG/REUTERS]

The scientists also suggest that the AI should be prevented from improving itself to become more efficient. "Things like processing limitations could be used to make the AI's thinking interpretable—not enough to cripple it, but to make sure it can't get away with things we aren't able to follow," Stuart Armstrong of the University of Oxford's Future of Humanity Institute told New Scientist of the paper.

However, Armstrong doubts such limits would be enough: the AI could simply make a copy of itself without those limitations and let the copy do whatever it wants.
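Both the promise and the loophole show up in a toy guard. The sketch below (the class name and the step budget are invented for illustration) logs every reasoning step against a fixed budget so a human can audit the trace:

    class InterpretabilityGuard:
        # Cap how many internal reasoning steps the system may take,
        # logging each one so a human can follow along. The budget
        # default is an illustrative assumption, not from the paper.
        def __init__(self, max_steps=100):
            self.max_steps = max_steps
            self.trace = []

        def step(self, description, fn, *args):
            if len(self.trace) >= self.max_steps:
                raise RuntimeError("processing budget exhausted; halting "
                                   "before reasoning outruns human review")
            result = fn(*args)
            self.trace.append((description, result))  # human-readable log
            return result

    guard = InterpretabilityGuard(max_steps=2)
    guard.step("add the totals", lambda: 2 + 2)
    # A copy instantiated without a guard would face no such limit.

The guard constrains only the process it wraps, which is exactly why Armstrong argues limits like these may not hold.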
