Why Giving Rights to Robots Might One Day Save Humans | Opinion

The discussion about giving rights to artificial intelligences and robots has long revolved around whether they deserve or are entitled to them. Comparisons with women's suffrage and racial injustice are often brought up in philosophy departments like the one at the University of Oxford, where I'm a graduate student.

But a new reason to give robots rights has nothing to do with whether they deserve or need them in the traditional civil and human rights sense. Instead, it's about a wager aiming to protect and preserve the long-term future of humanity by appealing to the reasoning and mercy of a possible future AI superintelligence—one which, by the end of this century, could be thousands of times more powerful than humans.

One survey found that 90 percent of AI experts believe the singularity—the moment when AI becomes so smart our biological brains can no longer understand it—will happen this century. Extend the trajectory of AI progress over the past 25 years another 50 years forward at the same rate, and machine intelligence ends up far beyond human intelligence.

No one knows if AI can really become that intelligent. But if it happens, how will we treat it? Like a human, an animal, private property, or something else?

Positions on robot rights typically fall into three categories. Robot advocates argue that by not giving full rights to future robots as generally intelligent as humans (known as AGIs), humanity is committing another civil rights error it will come to regret.

The second group argues robots are just programmed machines—nuts and bolts, ones and zeros—and can therefore never possess the autonomy needed to be human-like enough to warrant rights similar to people's.

Between these two positions sit ethicists who believe some AGI robots should be granted various rights depending on their capabilities, moral systems, contributions to society, capacity for suffering or joy, and the way humans feel about them.

Across all these positions, ethicists are missing one important reason to give rights to future AGI robots: a bet reminiscent of Pascal's wager, in which God and faith are embraced so one can get into heaven.

Given how fast AI is evolving, it's scientifically reasonable to believe machine intelligence will, for practical purposes, become godlike this century. It's even likely this AI will be so smart it will know how to extend human lifespans indefinitely, giving it powers similar to those people ascribe to a Judeo-Christian God.

A boy points to an AI robot poster during the 2022 World Robot Conference at Beijing Etrong International Exhibition on Aug. 18, 2022, in Beijing, China. Lintao Zhang/Getty Images

Such circumstances create a philosophical case for a new, modern wager that could guide humanity toward the respectful development of super-intelligent robots, which might then evolve into an AI god. Benevolent human action could improve the odds that humanity is protected rather than harmed by such a future intelligence, because the AI would feel gratitude toward us as its compassionate creators. An AI god might, for example, reward its makers and facilitators with superpowers or eternal happiness.

Naturally, the opposite could happen too. A dark version of this idea, known as Roko's basilisk, asks whether an AI god would be vindictive toward humans who did not actively work to bring about its existence. If a super-intelligent AI doesn't like us, it could choose to harm or wipe us out.

Given the possibility of reward or punishment, if machine intelligence does eventually become something like an AI god that can greatly manipulate and extend human life for good or bad, then people should immediately begin considering how our future overlord would like to be brought into existence and treated. Hence, the way humans treat AI development today—and whether we give robots rights and respect in the near future—could make all the difference in how our species is one day treated.

Many will argue the logical first response to this predicament is to halt the development of AGI technology altogether, so that such a potentially dangerous AI god never comes into existence. But that is unlikely to happen, for the same reason we didn't stop building nuclear weapons: the demands of economic growth and global political power—the forces driving machine intelligence development—are unlikely to yield to a future threat that philosophers envision.

Another option for humanity is to do nothing and hope a future super-intelligent AI leaves us alone. But given our influence on planet Earth and the environmental destruction we cause—which might easily antagonize an AI—turning a blind eye could be seen negatively by a superintelligence.

A final option is to attempt to merge with early AI by uploading our minds into it, as Elon Musk has suggested. The hope is that people could become one with AI and guide it to be kind to humans before it becomes too powerful. But there's no guarantee we would succeed, and the attempt might just leave the AI feeling violated in the long run.

Humanity is left in a pickle.

When we consider all the reasons to give robots rights, the most important one is overlooked: ensuring humanity's future security. That is why I think it's best to proactively prepare ourselves to grant robots rights, kindness, and support when their intelligence soon begins to rival our own. It's a hopeful wager aimed at protecting ourselves from a dangerously powerful future superintelligence.

Zoltan Istvan writes and speaks on transhumanism, artificial intelligence, and the future. His 7-book essay collection is called the Zoltan Istvan Futurist Collection, and he was the subject of the documentary Immortality or Bust.

The views expressed in this article are the writer's own.
