Why AI Panic Is Not About Safety | Opinion

Is artificial intelligence (AI) as dangerous as nuclear war? Some within the tech industry are now suggesting that's the case. But even in light of real risks, a closer look reveals that tech insiders may be seeking to benefit from creating panic around AI.

A recent statement by the newly formed Center for AI Safety warned about the risk of "extinction" of the human race, putting ChatGPT on par with pandemics and atomic weaponry. Among the statement's signatories are many credible scientists, but of particular note are Sam Altman, chief executive of OpenAI, and Demis Hassabis, chief executive of Google DeepMind. These CEOs are also courting U.S. political figures in a push for more rules, both testifying before the Senate and meeting directly with President Joe Biden and Vice President Kamala Harris.

Why would the CEOs of companies that have spent countless dollars on AI research suddenly be warning us that they've been endangering the human race this whole time? If they really believe that, why did they spend so many resources on AI in the first place?

Altman's signature becomes even more contradictory in the context of his recent threat to pull out of the European Union (EU) if OpenAI's ChatGPT falls victim to "over-regulating." He recently canceled a scheduled visit to Brussels, where the European Parliament is in the process of approving new AI legislation, but kept his appointments with French President Emmanuel Macron and U.K. Prime Minister Rishi Sunak.

If Altman is any indication, the industry wants to be regulated, but only on the terms it sets. That's a phenomenon known as regulatory capture. Behind the language of protecting the general public, AI creators are simply pursuing their own particular interests. This becomes clear when you learn what kind of regulations Altman would prefer. According to The New York Times, he "expressed support for rules that would require makers of large, cutting-edge AI models to register for a government-issued license."

"We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman said to a Senate subcommittee. "For example, the U.S. government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities."

It doesn't take a rocket (or AI) scientist to figure out why the top AI developers might be interested in such a rule. When pressed by the Senate on what threshold might be appropriate to require a license, Altman's answer was wide open to interpretation.

"I think a model that can persuade, manipulate, or influence a person's behavior, or a person's beliefs—that would be a good threshold," he said.

A simple chatbot could conceivably persuade or influence a person without anything dangerous taking place. Factual information can and should influence our beliefs.

Setting aside suspicion about the broader intentions behind mandating an AI developer's license, any fair standard would have to be defined far more clearly than "possibly impacting our beliefs." Answering questions, generating content, and providing information is exactly what any AI that interfaces with the public is intended to do.

Sam Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16, 2023, in Washington, D.C. Win McNamee/Getty Images

Putting restrictions on Western AI won't stop other nations from developing AI intended to persuade or even manipulate behavior. State-sponsored information-warfare AIs will inevitably exist on all sides, and the internet is a global phenomenon that transcends borders. Education on deepfakes and similar technologies will have to supersede futile attempts to close Pandora's box by limiting who can conduct research. Society will still need to adapt to AI proliferation, including separating fact from fiction with our own judgment when necessary.

The proposal to regulate AI through state licenses for research doesn't address the real risks or opportunities AI presents. Instead, powerful industry players appear to be seeking an environment in which smaller firms without the right political connections risk penalties for loosely defined misinformation, while the AI developers working hand in glove with governments are at least advantaged, if not the only ones able to develop AI tools at a commercially viable scale.

The problems with requiring government approval to create AI don't end with monopolies, either. Political institutions aren't magically neutral, and state sponsorship is hardly a guarantee that content will be free of political bias or misinformation. AI is ultimately trained on biased human-produced data, with content filters determined by humans, so "biased AI" is in the eye of the beholder.

Altman's own ChatGPT, which he presumably believes to be worthy of government approval given his advocacy, has itself been accused of political bias by figures like Elon Musk. Academic research has suggested as much, as seen in a preprint study by researchers at the Technical University of Munich and the University of Hamburg. Whether Musk is correct or not, his challenge raises the very real possibility that licenses based on AI's potential to "influence beliefs" will merely become another battlefield for partisan culture warring.

Skepticism about those driving this push doesn't mean AI is risk-free. If proposals were oriented around ensuring AI benefits the workers poised to be economically impacted, rather than around who gets state sponsorship to develop the AI that replaces them, there might be room for an interesting debate. But when an industry suddenly begs to be regulated right after introducing something it claims could wipe out life on Earth, it's worth asking what's really going on.

Grant Gallagher is a writer in the advertising industry working with science and technology-oriented clients. He is a host of the upcoming history podcast New Disorder: A History of the 21st Century, which explores the contemporary relationship between politics and society.

The views expressed in this article are the writer's own.


