The technology behind autonomous navigation is developing quite rapidly. From our early investment in Cruise, we've gotten a vantage point on just how much progress has been made in the past three years alone. But if I'm confident about the tech, I'm less sure about the target. That is, how "safe" do autonomous vehicles need to be? And what is today's reality? There were 35,092 automotive deaths in the US in 2015 (a figure that, on a per capita basis, is down significantly from the 1970s).
So is the target number for autonomous vehicles 35,091 deaths per year? That seems rational but would require us *not* to overreact to headlines blaming deaths on errors by the technology.
Or is the number some smaller percentage of current fatalities – say 10,000 deaths per year – owing to the emotional reality that the benefits will need to be quantitatively significant in order to make drivers (and regulators) comfortable with giving up control to the machines?
Maybe, controversially, we should be able to tolerate more deaths per year in the move to autonomy. Certainly if you look at per capita deaths over the past 100 years, there were eras when the rate was nearly 3x today's! And if autonomous vehicles add value in aggregate because they support faster, more efficient, more relaxed shipping and travel, shouldn't we tolerate more danger in return?
Beyond the general psychology of safety and autonomy, where we end up on this "how safe" spectrum will also influence two important factors: how quickly we attempt to move from 0% to 100% autonomous, and the role of insurance.
In terms of rollout velocity, it's fairly noncontroversial to suggest that if every car were autonomous (and "talked" to the others continuously while maintaining a shared or similar "safety" ruleset), the road would be safer than a 50/50 mix. Has anyone seen a graph that plots autonomous vehicles as a percentage of active vehicles against projected automotive accident rate? These forecasts will certainly influence the regulatory and financial incentives that accelerate autonomous density once the technology crosses into mainstream viability. It's a classic "it'll happen slowly, then very quickly" dynamic.
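To make that intuition concrete, here's a minimal sketch of what such a curve might look like. It assumes crashes come from pairwise vehicle interactions, so the mix of human–human, human–AV, and AV–AV encounters shifts with the square of penetration; the rate parameters are placeholders I made up for illustration, not real actuarial data.

```python
# A toy model of fleet-wide accident rate as a function of autonomous
# penetration. All rate parameters are made-up placeholders; the point
# is the shape of the curve, not the absolute numbers.

def accident_rate(p, r_hh=1.00, r_ha=0.60, r_aa=0.10):
    """Expected relative accident rate at autonomous penetration p (0..1).

    Assumes crashes arise from pairwise vehicle interactions, so
    human-human, human-AV, and AV-AV encounters occur with binomial
    weights (1-p)^2, 2p(1-p), and p^2 respectively.
    """
    return (1 - p) ** 2 * r_hh + 2 * p * (1 - p) * r_ha + p ** 2 * r_aa

for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"{p:>4.0%} autonomous -> relative accident rate {accident_rate(p):.2f}")
```

Even in this crude model the marginal safety gain grows as penetration rises (each new AV increasingly meets other AVs rather than humans), which is exactly the "slowly, then very quickly" shape.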
The second question I have is about the role of insurance. As distasteful as it may seem, we do today have ways to value a life based on a number of factors including age, race, vocation, geography and so on. The "how safe" question might be moot (excluding corporate negligence) if we're comfortable with allowing insurance to fill the gap between desired safety and actual safety. That is, let's say autonomous vehicles were twice as unsafe relative to today. Do we need as a society to wait until safety improves, or can we use financial compensation as a mechanism to bridge the gap? And whose insurance? The driver's, as it sits today? Or will the manufacturers (Tesla, GM, etc.) need to carry liability insurance as a passthrough if it's decided the driver wasn't at fault in an accident but instead the algorithm was?
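For a sense of scale on that "bridge," here's a back-of-the-envelope calculation. The fatality count comes from above; the vehicle count and the value-of-statistical-life figure are rough approximations (the latter roughly in line with mid-2010s US DOT guidance), so treat the output as illustrative, not actuarial.

```python
# Back-of-the-envelope: what would it cost to "bridge the gap" with
# insurance if autonomous vehicles were twice as unsafe as today?
# Inputs are rough public figures, not precise actuarial data.

US_DEATHS_2015 = 35_092              # US traffic fatalities, 2015
REGISTERED_VEHICLES = 264e6          # approx. US registered vehicles
VALUE_OF_STATISTICAL_LIFE = 9.6e6    # approx. US DOT guidance, mid-2010s

# Expected fatality cost per vehicle per year at today's safety level.
baseline = US_DEATHS_2015 / REGISTERED_VEHICLES * VALUE_OF_STATISTICAL_LIFE

# If autonomy were 2x as unsafe, the insurance "bridge" is the extra
# expected cost relative to that baseline.
risk_multiple = 2.0
surcharge = (risk_multiple - 1.0) * baseline

print(f"Baseline expected fatality cost: ${baseline:,.0f}/vehicle/year")
print(f"Premium surcharge at {risk_multiple:.0f}x risk: ${surcharge:,.0f}/vehicle/year")
```

In other words, if the expected fatality cost embedded in today's premiums is on the order of $1,300 per vehicle per year, a 2x-risk regime might demand roughly that much again as a surcharge – a gap that looks bridgeable financially even if it isn't emotionally.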