The moral case for self-driving cars: welcoming our new robot chauffeurs.

Author: Ronald Bailey
Column

Tesla, Nissan, Google, and several other carmakers have declared that they will have commercial self-driving cars on the highways before the end of this decade. Experts at the Institute of Electrical and Electronics Engineers predict that 75 percent of cars will be self-driving by 2040. So far California, Nevada, Florida, Michigan, and the District of Columbia have passed laws explicitly legalizing self-driving vehicles, and many other states are looking to do so.

The coming era of autonomous autos raises concerns about legal liability and safety, but there are good reasons to believe that robot cars may exceed human drivers when it comes to practical and even ethical decision making.

More than 90 percent of all traffic accidents are the result of human error. In 2011, there were 5.3 million automobile crashes in the United States, resulting in more than 2.2 million injuries and 32,000 deaths. Americans spend $230 billion annually to cover the costs of accidents, accounting for approximately 2 to 3 percent of GDP.

Proponents of autonomous cars argue that they will be much safer than vehicles driven by distracted and error-prone humans. The longest-running safety tests have been conducted by Google, whose autonomous vehicles have traveled more than 700,000 miles so far with only one accident (when a human driver rear-ended the car). So far, so good.

Stanford University law professor Bryant Walker Smith, however, correctly observes that there are no engineered systems that are perfectly safe. Smith has roughly calculated that "Google's cars would need to drive themselves more than 725,000 representative miles without incident for us to say with 99 percent confidence that they crash less frequently than conventional cars." Given expected improvements in sensor technologies, algorithms, and computation, it seems likely that this safety benchmark will soon be met.
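The logic behind a threshold like Smith's can be sketched with a simple Poisson argument: if conventional cars crash at some baseline rate, how many incident-free miles must an autonomous fleet log before we can be 99 percent confident its rate is lower? The short script below illustrates the calculation; the baseline crash rate used is a hypothetical figure for illustration, not Smith's actual input.

```python
import math

def miles_needed(baseline_crashes_per_mile, confidence=0.99):
    """Miles of incident-free driving needed to conclude, at the given
    confidence level, that the true crash rate is below the baseline.

    Poisson model: P(zero crashes in n miles | rate r) = exp(-r * n).
    Requiring exp(-r * n) <= 1 - confidence gives
    n >= ln(1 / (1 - confidence)) / r.
    """
    return math.log(1.0 / (1.0 - confidence)) / baseline_crashes_per_mile

# Hypothetical baseline: one police-reported crash per 160,000 miles
# (an assumed illustrative value, not a figure from the article).
r = 1 / 160_000
print(round(miles_needed(r)))  # on the order of 700,000+ miles
```

Note how sensitive the answer is to the assumed baseline: halving the baseline crash rate doubles the miles required, which is why estimates of this threshold vary widely.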

Still, all systems fail eventually. So who will be liable when a robot car, however rarely, crashes into someone?

An April 2014 report from the Brookings Institution, a good-government think tank, argues that the current liability system can handle the vast majority of claims that might arise from damages caused by self-driving cars. A similar April 2014 report from the free-market Competitive Enterprise Institute (CEI) largely agrees: "Products liability is an area that may be able to sufficiently evolve through common law without statutory or administrative intervention."

A January 2014...

