An issue brief written by PVMI Director John Paul MacDuffie for the Penn Wharton Public Policy Initiative
In March, news broke that a self-driving Uber test vehicle struck and killed a pedestrian in Arizona.
It was the first pedestrian death caused by an autonomous vehicle (AV) in the United States. Two years earlier, in May 2016, Tesla made headlines when one of its cars collided with a truck on a Florida highway while in “Autopilot” mode, killing the driver, who had not responded to the car’s warnings prompting him to resume full control of the driving task. That incident will forever be remembered as the first “self-driving” car death.

Despite the daily tragedies that unfold on American roads, which saw 37,461 driving-related deaths in 2016, stories like these become breaking news because of their novelty. A future in which self-driving cars dominate the streets and highways of America could mean far fewer driving-related deaths, and it is a future sought by government (at all levels), by industry, and by many of these vehicles’ potential users. But when accidents inevitably occur during these years of technological development, many people reasonably ask, “What risks are we, as a society, willing to accept to advance this technology?” and “What are policymakers doing to mitigate those risks?” The American public will need to answer these questions many times over in the coming years, but the federal government has already made its preferences clear, at least for the moment.