Monday, 18 September 2017
The voices advocating a transition to self-driving vehicles / autonomous vehicles / robocars claim they will eliminate '1 million deaths per year'. I have been told there is a 'moral imperative' to use AI for driving because of this. However, as pointed out by @SafeSelfDrive on Twitter, robocars are not a response to user pull or to a safety initiative. The robocar effort started at Google, and the motivation behind it is left somewhat unclear in this interview with Chris Urmson. I am reliably informed that everyone else is simply reacting to Google. All in all, there was no obvious case for this massive investment, despite the crowd now shouting about ending 1 million deaths.
Nick Reed of TRL has an interesting piece on robocar safety, pointing out the difficulties of proof by testing. (Of course, testing is only one part of a safety-critical system life cycle.) He tells us that in the UK there are 180 million miles between fatal accidents. Vehicles in the UK do about 324 billion miles a year (see here). People say they are unhappy with the current driving death toll, so what would be a better number? The EU has a strategic target of halving road fatalities, so let's use that, i.e. a fatality every 360 million miles. People distinguish voluntary risk (driving) from involuntary risk (being transported) by a factor of 1000, so the target for robocars is a fatality every 360 billion miles, i.e. a bit less than one a year in the UK. My uninformed guess is that this is the right order of magnitude.
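The arithmetic behind that target can be sketched in a few lines. The inputs are the figures quoted above; the factor of 1000 for voluntary versus involuntary risk is the post's working assumption, not an established standard.

```python
# Crude arithmetic behind the robocar fatality target.
uk_miles_per_year = 324e9          # total UK vehicle miles per year
miles_per_fatality_now = 180e6     # current miles between fatal accidents

fatalities_now = uk_miles_per_year / miles_per_fatality_now  # ~1800 per year

# EU strategic target: halve fatalities, i.e. double the interval
miles_per_fatality_halved = 2 * miles_per_fatality_now       # 360 million miles

# Assumed factor of ~1000 between voluntary and involuntary risk
robocar_target = 1000 * miles_per_fatality_halved            # 360 billion miles

fatalities_per_year_target = uk_miles_per_year / robocar_target
print(f"Current UK fatalities/year: {fatalities_now:.0f}")
print(f"Robocar target: one fatality per {robocar_target:.0e} miles")
print(f"Implied UK robocar fatalities/year: {fatalities_per_year_target:.2f}")
```

This gives roughly 1800 fatalities a year now, and a robocar target of about 0.9 fatalities a year across the whole UK fleet.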
A comparison with rail might help. People now travel about 40 billion miles by rail in the UK each year (a big increase over recent years). There has been 1 passenger fatality since 2006. Some crude arithmetic: 1 fatality per 10 years, and 40 billion miles p.a., gives us a fatality every 400 billion miles, which isn't so far off the robocar target.
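The rail comparison is the same kind of crude arithmetic, using the figures above (taking the record since 2006 as roughly a decade):

```python
# Crude rail safety arithmetic for comparison with the robocar target.
rail_miles_per_year = 40e9     # UK passenger rail miles per year
passenger_fatalities = 1       # passenger fatalities since 2006
years = 10                     # roughly a decade of that record

miles_per_fatality_rail = rail_miles_per_year * years / passenger_fatalities
print(f"Rail: one passenger fatality per {miles_per_fatality_rail:.0e} miles")
```

That is one fatality per 400 billion passenger miles, the same order as the 360 billion mile robocar target.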
In 2014, there were 315 fatalities on the rail network, 89% of which were suicides. It is important that the boundaries for robocar fatalities are set and monitored appropriately. John Adams has pointed out that, while car occupant fatalities have decreased, pedestrian and cyclist fatalities have increased.
Chris Urmson has this to say about safety criteria:
"But when we think about the rate at which bad things happen, they’re very low. So you know in America, somebody dies in a car accident about 1.15 times per 100 million miles. That’s like 10,000 years of an average person’s driving. So, let’s say the technology is pretty good but not that good. You know, someone dies once every 50 million miles. We’re going to have twice as many accidents and fatalities on the roads on average, but for any one individual they could go a lifetime, many lifetimes before they ever see that. So that experience with the technology and kind of becoming falsely comfortable with the safety of it is one of the challenges they face."
Talking about doubling the accident rate is rather different from the breathless hype of the 'million deaths a year' crowd.
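Urmson's figures can be checked with quick arithmetic. Note that the average annual mileage per driver is my assumption here, not stated in the quote:

```python
# Checking the figures in Urmson's quote.
deaths_per_100m_miles = 1.15
miles_per_death = 100e6 / deaths_per_100m_miles   # ~87 million miles per death

avg_miles_per_year = 10_000                       # assumed per-driver mileage
years_per_death = miles_per_death / avg_miles_per_year
print(f"One death per {miles_per_death:.2e} miles")
print(f"~{years_per_death:.0f} years of average driving per death")

# His hypothetical 'pretty good but not that good' system: one death per
# 50 million miles, i.e. roughly 1.7x the current rate (he rounds to 'twice').
ratio = miles_per_death / 50e6
print(f"Hypothetical system is {ratio:.1f}x the current fatality rate")
```

At 10,000 miles a year this gives about 8,700 years of average driving per death, the same order of magnitude as Urmson's "10,000 years".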
In a dazzling piece about driving in India, Alex Roy says:
"Because in the absence of a technical or regulatory definition of “safety”, manufacturers—who have invested billions in self-driving—will be forced to decide what level of self-driving is safe enough to bring to market, and market it.
The mobility industry and clickbait media supporting it are almost totally invested in the concept of the Zero Day, the day when self-driving cars reach a mystical tipping point and “take over the world,” which I also refer to as the Autonomotive Singularity. The truth is that their utopian, winner-takes-all narrative is no more than a velveteen vision of good intentions guided (and blinded) by ham-fisted profit."
The idea of manufacturers setting their own safety criteria based on marketing does not appeal to me one bit.
The right approach in the UK is, of course, an ALARP safety case with a good understanding of 'grossly disproportionate costs', supported by use of appropriate standards. A decent profile audit against Automotive SPiCE would help.
Posted by BrianSJ at 14:59