The Trolley Problem: Scientists Ask 2 Million People Who Autonomous Cars Should Kill in Unavoidable Crashes

An Uber self-driving car drives down 5th Street on March 28, 2017 in San Francisco, California. The program was temporarily halted following a crash in Tempe, Arizona. Justin Sullivan/Getty Images

As artificial intelligence continues to advance rapidly, concerns are growing about how machines, such as autonomous vehicles, will make moral decisions—particularly those where human lives are at stake.

To shine a light on this issue, researchers from MIT conducted a massive global survey to gauge what people think about how autonomous vehicles should behave in such situations.

The online survey, which involved more than two million people from 233 countries and territories, focused on a classic ethical thought experiment known as the "Trolley Problem," adapted here to a scenario in which an accident involving an autonomous vehicle (AV) cannot be avoided and the machine must choose between two potentially fatal options.

For example, a driverless car might face the choice of swerving into one or two jaywalkers or continuing into a larger group of pedestrians.

"The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to," Edmond Awad, lead author of the study from the MIT Media Lab, said in a statement. "We don't know yet how they should do that."

In their experiment, the researchers developed a multilingual online game, called "Moral Machine," in which participants had to state their preferred outcome in a series of dilemmas that autonomous vehicles might face.

In total, the Moral Machine collected nearly 40 million individual decisions, which the researchers analyzed as a whole or in groups defined by participants' age, education, gender, income, and political or religious views.
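The article does not describe the dataset's actual structure, so the following Python sketch is purely illustrative: it shows, with invented column names and toy numbers, how per-response records of this kind could be aggregated overall and then broken out by a demographic attribute.

```python
import pandas as pd

# Hypothetical per-response records; the real Moral Machine data and its
# schema differ, and these column names are invented for illustration.
responses = pd.DataFrame({
    "gender":         ["f", "m", "f", "m", "m", "f"],
    "age":            [23, 54, 31, 67, 40, 29],
    "spared_younger": [1, 0, 1, 0, 1, 1],  # 1 = chose to spare the younger character
})

# Aggregate preference across all respondents...
overall = responses["spared_younger"].mean()

# ...and broken out by a demographic attribute, mirroring the study's
# comparisons across age, education, gender, income, and belief groups.
by_gender = responses.groupby("gender")["spared_younger"].mean()

print(f"Overall share sparing the younger character: {overall:.2f}")
print(by_gender)
```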

"Back in 2016, the autonomous vehicle industry was growing and the technology was advancing, but there had not been much of a conversation about the societal impact of AVs," Sohan Dsouza, an author of the study from the MIT Media Lab, told Newsweek. "Our group had already been studying the social dilemma of autonomous vehicle adoption with respect to choosing between passengers and pedestrians."

"However, given the number of interacting real-world variables potentially involved, we found available surveying platforms inadequate for full exploration of that variable space," he said. "That's why we built the Moral Machine—so we could get enough data to tease out the relative importance of those different factors."

Dsouza said it was important to investigate this to help inform the conversation about society's expectations of AV ethics and predict the likely reactions to AV crashes.

"Studying this at a global cross-cultural level has enabled us to observe how the relative ethical priorities people expect of AI can vary across cultures, and what might influence these expectations."

The survey results, which are published in the journal Nature, uncovered three preferences that people around the world agree on most strongly: human lives should be spared over those of animals; many people should be saved over a few; and younger people should be spared ahead of the elderly.
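Purely as an illustration, and not a description of how any real vehicle is or should be programmed, these three majority preferences can be read as an ordered set of tie-breakers. The short Python sketch below (all names and numbers hypothetical) applies them to the jaywalker scenario described earlier.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One side of a hypothetical unavoidable-crash dilemma."""
    humans: int       # human lives lost under this outcome
    animals: int      # animal lives lost under this outcome
    mean_age: float   # mean age of the humans at risk

def crash_cost(o: Outcome) -> tuple:
    # Lexicographic ordering mirroring the three majority preferences:
    #   1. spare humans over animals -> fewer human deaths first
    #   2. spare many over few       -> fewer total deaths next
    #   3. spare the young           -> among humans, prefer harming an older group
    return (o.humans, o.humans + o.animals, -o.mean_age)

def choose(a: Outcome, b: Outcome) -> Outcome:
    """Return the outcome the car would accept, i.e., the lesser harm."""
    return min(a, b, key=crash_cost)

# The jaywalker example from earlier: swerve into two people,
# or continue into a larger group of five.
swerve = Outcome(humans=2, animals=0, mean_age=35.0)
straight = Outcome(humans=5, animals=0, mean_age=35.0)
print(choose(swerve, straight))  # accepts the two-person harm, sparing the many
```

No real system would reduce ethics to a three-key sort; the point is only to make the reported preference ordering concrete.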

"The main preferences were to some degree universally agreed upon," Awad said. "But the degree to which they agree with this or not varies among different groups or countries."

These regional differences manifested themselves in a variety of ways. For example, in an "eastern" group of countries, including many in Asia, the researchers found a less pronounced tendency to favor younger people over the elderly than in the "southern" group of countries.

"We also observed correlations between various national metrics—such as rule of law, economic inequality, and cultural distance—and relevant preferences of people in those countries. For example, relatively stronger preference to sacrifice in favor of high-status individuals in countries with higher economic inequality," Dsouza said.

The researchers say that acknowledging people's moral preferences could inform the way that the software controlling autonomous vehicles is designed.

"First, the research was intended to seed—and has been seeding—the conversation in society about autonomous vehicles and ethics," Dsouza said. "It could help lawmakers and automakers understand what fears they would need to allay in different parts of the world to encourage adoption. And autonomous tech policy makers could have a local ground truth of how AV ethics are perceived, which can help inform their guidelines as to what ethical decision-making factors to specifically prescribe or prohibit for consideration by AI."

Furthermore, given the large amount of public interest in the study, the authors recommend that those at the helm of technological innovation seek the views of ordinary people when public safety could be affected.

"What we have tried to do in this project, and what I would hope becomes more common, is to create public engagement in these sorts of decisions," Awad said.

Toby Walsh, a professor of artificial intelligence at the University of New South Wales (UNSW) in Australia, who was not involved in the research, said the results of the study are "interesting" and "provocative" but warned that while they reveal much about people's expectations of how such autonomous systems should operate, those expectations should not necessarily dictate how the machines behave.

"The values we give machines should not be some blurred average of a particular country or countries," he said in a statement. "In fact, we should hold machines to higher ethical standards than humans for many reasons: Because we can, because this is the only way humans will trust them, because they have none of our human weaknesses, and because they will sense the world more precisely and respond more quickly than humans possibly can."

The study raises significant questions about whom we should protect in life-and-death situations involving autonomous vehicles, according to Iain MacGill, a researcher at UNSW who was also not involved in the work.

"However, even with sufficient societal consensus on what we would like these vehicles to do in the case of unavoidable answers, we still face the challenge of coding such 'ethics' into autonomous vehicles," he said. "And then persuading people to buy vehicles that explicitly put the safety of other road users at the same or perhaps even higher priority than themselves."

"It doesn't help that we have companies racing to bring these vehicles to market with what seems to be insufficient regard to the societal risks invariably involved with new technology deployment," MacGill said. "And can we trust the companies driving this, some with significant questions about their own 'winner takes all' business ethics, to appropriately program socially agreed ethics into their products?"

Colin Gavaghan, a professor at the University of Otago in New Zealand, said that these kinds of Trolley Problems are philosophically fascinating but, until now, have rarely been of much concern to the law.

"The law tends to be pretty forgiving of people who respond instinctively to sudden emergencies. The possibility of programming ethics into a driverless car, though, takes this to another level," he said. "That being so, which ethics should we program? And how much should that be dictated by majority views? Some of the preferences expressed in this research would be hard to square with our approaches to discrimination and equality—favoring lives on the basis of sex or income, for instance, really wouldn't pass muster here.

"One preference that might be easier to understand and to accommodate is for the car to save as many lives as possible," he continued. "Sometimes, that might mean ploughing ahead into the logging truck rather than swerving into the group of cyclists. Most of us might recognize that as the 'right' thing to do, but would we buy a car that sacrificed our lives—or the lives of our loved ones—for the good of the many?"

This article has been updated to include additional comments from Sohan Dsouza.
