When Will We Know Self-Driving Cars Are Safe?

Google employee Reko Ong next to a prototype of the company's self-driving vehicle, in Mountain View, California, on September 29, 2015. Elijah Nouvelage/Reuters

The National Highway Traffic Safety Administration on Tuesday released much-anticipated guidelines intended to outline best practices for autonomous vehicle safety.

State and local policymakers, the transportation industry and the public are looking to that guidance to answer a key question: Will autonomous vehicles be safe before they are allowed on the road for consumer use?

The answer: maybe. And that might be the best that can be said.

To answer the overarching question of safety, the guidelines would need to articulate two things: first, how autonomous vehicle safety should be measured, and second, what threshold of safety would be required before autonomous vehicles are made publicly available.

In essence, what test do autonomous vehicles have to take and what constitutes a passing grade?

Both are genuinely open questions. There is no road test that could judge how safe a vehicle is; there are simply too many conditions and scenarios to test them all. DMV driving tests don't prove how good people are at driving either, but that feels acceptable to most people because they hold human drivers and robot drivers to different expectations. Nor is it feasible to ask developers to prove safety simply by test-driving their vehicles.

We recently showed that autonomous vehicles would have to be driven an astronomical number of miles before their safety could be demonstrated with statistical confidence. That would take decades, during which human drivers would continue to cause fatalities at an alarming rate.
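To see why the numbers get so large, here is a rough back-of-the-envelope sketch, not the method of the underlying RAND analysis. It assumes, for illustration only, a human fatality rate of roughly one per 100 million miles driven and uses the statistical "rule of three" to get a 95 percent upper confidence bound from fatality-free test miles.

```python
# Back-of-the-envelope sketch (illustrative assumptions, not the RAND study's exact method):
# if an autonomous fleet logs N miles with zero fatalities, the "rule of three"
# gives roughly 3/N as the 95% upper confidence bound on its fatality rate.
# To claim the fleet is at least as safe as human drivers, that bound must fall
# below the human rate, assumed here to be about 1 fatality per 100 million miles.

human_fatality_rate = 1.0 / 100_000_000  # assumed rate: ~1 fatality per 100M miles

# Miles of fatality-free driving needed for the 95% upper bound (3/N)
# to drop below the assumed human rate.
miles_needed = 3 / human_fatality_rate

print(f"Roughly {miles_needed:,.0f} fatality-free miles needed")
# -> Roughly 300,000,000 fatality-free miles
```

Even this simplified calculation lands in the hundreds of millions of miles, and demonstrating a statistically significant improvement over human drivers, rather than mere parity, would require far more driving still.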

And human drivers aren't getting safer: 2015 saw the largest one-year increase in U.S. traffic fatalities in over 50 years, partly because Americans drove more and partly because they drove worse.

Meanwhile, there are as yet no simulations, models or other methods that can prove safety. So it is no surprise that the federal guidelines have not answered this difficult question.

The second question, of how safe autonomous vehicles should be, is also worth considering, even if their actual safety cannot be proven. Some will insist that anything short of totally eliminating risk is a safety compromise. They may feel that humans are allowed to make mistakes, but machines are not.

But, again, waiting for autonomous vehicles to operate perfectly misses opportunities to save lives by keeping far-from-perfect human drivers behind the wheel.

It seems sensible to allow autonomous vehicles on America's roads once they are judged safer than the average human driver, so that more lives can be saved, and sooner, while still ensuring the vehicles don't create new risks.

But there is even an argument to be made that autonomous vehicles should be allowed before they are as safe as the average human driver, provided developers can use early deployment to improve the vehicles rapidly. The vehicles might then become at least as good as the average human driver sooner than they otherwise would, and thus save more lives overall.

The lack of consensus on this point is not a failure of sound thinking. It is not a failure at all, but rather a genuine expression of Americans' different values and beliefs when it comes to humans vs. machines. And so, once again, it is reasonable that federal guidelines do not draw a line in the sand.

So what do the guidelines do? They lay a foundation for, and bring transparency to, the issue of safety by requiring developers to articulate how their vehicles behave and meet a diverse set of safety objectives. These include, among other things, how the vehicles will detect and react to traffic, respond in the event of a failure or crash, manage privacy and cybersecurity, behave ethically, comply with laws, and so on.

The guidelines also require developers to describe how they will educate consumers and how consumers will interact with the vehicle. This moves in the right direction in two ways.

First, it sets a minimum safety bar that developers must clear before they can make their autonomous vehicles available to the public. This bar is necessary (though not provably sufficient) for safety.

But the transparency is perhaps even more important. Assuming that developers' information will be available to the public in some form, it would give consumers the ability to make informed choices about which autonomous vehicles, if any, to use.

Of course, this may be of little comfort to those who do not want to use autonomous vehicles at all, but may have to share the road with them anyway.

In sum, the guidelines don't guarantee safety, but they move the needle in the right direction.

Nidhi Kalra is a senior information scientist at the nonprofit, nonpartisan RAND Corporation, a co-director of RAND's Center for Decision Making under Uncertainty, and a professor at the Pardee RAND Graduate School.
