9 Comments

I appreciate this in-depth analysis of the robot driver vs. human driver and of corporate public relations vs. human perception of tragic events. I'm wondering what measures Cruise took after that horrific accident to avoid similar events in the future. Maybe raising the testing bar before putting vehicles on the road? Needless to say, the other driverless vehicle makers should have taken notice too!

“And that’s a shame, because in the long term this will hurt the industry.” Arguably it already has.

I suggest the industry look to the rapidly growing list of hazards identifiable as AV-related (e.g., accelerating toward a pedestrian in a crosswalk, slamming into a crossing truck) and derive track tests that validate future AV safety in those cases. Comparison with human drivers in those identical track tests would support relative safety claims, and would not endanger the public through inconclusive, unstructured mileage accumulation by potentially defective and unsafe AVs. NCAP and IIHS ratings have enabled safety comparisons across many features and models. The AV industry should welcome similarly unbiased safety-performance comparisons with humans in controlled settings if it truly wants to convince rather than bullshit its way to public acceptance. What’s good for airbags would also be good for gasbags.

I have puzzled over this dilemma since the very earliest days. The challenge is: what if the bar of never doing something a human thinks they could have avoided is too hard to meet? "Too hard" could even mean impossible, but let's assume what I think most people assume: that it's quite a bit harder than the bar of positive risk balance.

What this means is that if you delay deployment until you reach that high bar, you forgo a staggering amount of general risk reduction: a reduction that would probably save tens of thousands of lives and prevent millions of injuries, depending on how long a delay you envision. The math is clear and overwhelming. Yes, as you say, we don't see those prevented incidents, or even the ones that weren't prevented. We're not very good at that, and the public is not utilitarian about this math.
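
To make that math concrete, here is a minimal back-of-envelope sketch. The fatality rates, fleet share, and delay length below are illustrative assumptions, not figures from this discussion; the point is only that the forgone benefit scales with the rate difference times the miles driven during the delay.

```python
# Back-of-envelope sketch of forgone risk reduction from delaying deployment.
# Every number below is an illustrative assumption, not a measured value.

HUMAN_FATALITIES_PER_100M_MILES = 1.3   # rough US order of magnitude
AV_FATALITIES_PER_100M_MILES = 0.65     # assumed: AVs at half the human rate
ANNUAL_US_VMT_MILES = 3.2e12            # rough annual US vehicle miles traveled
AV_FLEET_SHARE = 0.10                   # assumed share of miles AVs would drive
DELAY_YEARS = 5                         # assumed length of the deployment delay

def forgone_lives(delay_years: float) -> float:
    """Lives not saved because deployment was delayed by `delay_years`."""
    av_miles = ANNUAL_US_VMT_MILES * AV_FLEET_SHARE * delay_years
    rate_diff = (HUMAN_FATALITIES_PER_100M_MILES
                 - AV_FATALITIES_PER_100M_MILES) / 1e8
    return av_miles * rate_diff

print(f"Forgone lives over {DELAY_YEARS} years: {forgone_lives(DELAY_YEARS):,.0f}")
```

With these assumed inputs the sketch lands in the tens-of-thousands range; shrink the fleet share or the rate difference and the number drops, but the structure of the argument stays the same.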

The job of the regulator is to improve overall road safety, not to react with natural emotion to individual events, and to resist the instinct of the public. Whether this can work is unclear. But companies quite rationally don't see that they have a choice; they simply can't wait for that higher bar even if they sought it. So don't be surprised if they hold out hope for getting regulators to accept the risk balance.

(And it's probably going to happen, for reasons nobody predicted, with Elon Musk having control of the reins of policy.)

I have advised both Waymo and Cruise on this and do indeed advise them to show their responsibility through transparency. But I also know it's difficult advice to take from a practical standpoint.

BTW, you have some significant factual errors in your analysis of the Cruise dragging, but they don't affect the question above too much.

To be clear, I'm NOT arguing that regulators should adopt this standard for recalls. I'm pointing out that companies should take this effect into account when they think about how the public will react to their public-facing messaging and behaviors.

A Substack bug is not letting me post the more detailed reply. I broadly agree that the companies must consider this, and I have advised them to, particularly when I held specific advisory roles, but also in general. However, since we're talking about the public's reaction, this is hard not to conflate with regulatory action. The DMV has refused to state which of the two reasons it pulled the Cruise permit (the cover-up or safety) mattered most, though most presume the former.

Brad -- I would be happy to correct any and all factual errors in my analysis of the Cruise pedestrian dragging mishap. The scholarly version here has specific citations for pretty much everything factual: https://arxiv.org/abs/2402.06046

So if you send me a list of what is wrong I'd be happy to do an update.

If the Cruise-issued external report has significant factual errors that transferred into my paper, then I'd request a pointer to public information telling a different story. As stated in the paper, the analysis assumes those reports are factually accurate, with the understanding that they likely tell the best possible side of things; the gaping holes in some parts of the story suggest their authors were careful to avoid digging in places where there was bad news.

I think positive risk balance (PRB) is still an important safety metric; for one thing, it can indicate which way the system is trending. But I agree regarding what's currently happening (everybody comparing to humans/themselves). In particular, I think when there are fatalities, the news headlines will blow up, and that will be a big risk for autonomous vehicle companies. Also, transparency does seem like a must.
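
As a minimal sketch of the trending point, assuming PRB is computed as the human baseline harm rate minus the AV harm rate per million miles (the quarterly figures and baseline below are made up purely for illustration):

```python
# Sketch of tracking positive risk balance (PRB) over time.
# PRB here = (human baseline incident rate - AV incident rate) per million miles;
# a positive and rising value suggests the system is trending safer.
# All figures below are invented for illustration only.

quarterly_data = [
    # (quarter, AV incidents, AV miles driven)
    ("2024Q1", 12, 1_500_000),
    ("2024Q2", 10, 1_800_000),
    ("2024Q3",  9, 2_200_000),
]
HUMAN_BASELINE_PER_MILLION_MILES = 6.0  # assumed comparable human incident rate

for quarter, incidents, miles in quarterly_data:
    av_rate = incidents / (miles / 1_000_000)
    prb = HUMAN_BASELINE_PER_MILLION_MILES - av_rate
    print(f"{quarter}: AV rate {av_rate:.2f}/M miles, PRB {prb:+.2f}")
```

The direction of that trend, rather than any single quarter's number, is what carries the signal.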
