Is a Robotaxi Crash Really "Unavoidable"?
A crucial aspect of whether a crash could have been avoided is the contextual risk
We have recently seen two news items in which robotaxi companies are messaging, as they so often do, that a crash was physically unavoidable, and thus (it is implied) is not their fault. I’m not going to try to keep up with breaking developments in the investigations here, but rather point out that claims of a crash being “unavoidable” can ring hollow when considering how a careful and competent human driver would have reacted to the contextual risk.
Waymo & Zoox: A Tale of Two Mishaps
Consider two recent mishaps:
A Waymo AV striking a child in the area of a school (minor injury): Jan 23, 2026, https://static.nhtsa.gov/odi/inv/2026/INOA-PE26001-10005.pdf
From the NHTSA document: “NHTSA is aware that the incident occurred within two blocks of a Santa Monica, CA elementary school during normal school drop off hours; that there were other children, a crossing guard, and several double-parked vehicles in the vicinity; and that the child ran across the street from behind a double parked SUV towards the school and was struck by the Waymo AV.” A Waymo blog post says: “The event occurred when the pedestrian suddenly entered the roadway from behind a tall SUV, moving directly into our vehicle's path.”

A Zoox AV striking an opening car door, perhaps causing a minor injury to the hand of the driver opening the door: Jan 17, 2026. The Zoox take is: “A robotaxi was traveling straight on 15th Street when the driver of a parked vehicle suddenly opened their door into the path of the robotaxi … The robotaxi identified the opening door and tried to avoid it but contact was unavoidable.”
In both cases the companies are making sure to say the conflict arose “suddenly” (emphasis mine in the quotes above). In both cases they are arguing that their computer drivers did the best they could, as limited by the laws of physics. Waymo is apparently arguing they did better than a human driver because they were able to partially brake before impact, and according to their mathematical model a human driver’s longer Perception-Response Time would have resulted in a higher-speed impact.
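To make concrete the kind of Perception-Response Time argument at issue, here is a minimal kinematics sketch. All numbers are hypothetical illustrations (the 12 m gap, 0.5 s and 1.5 s response times, and 7 m/s² braking are my assumptions, not Waymo's actual model parameters):

```python
import math

def impact_speed(v0, prt, decel, gap):
    """Speed at impact (m/s) for a vehicle traveling at v0 m/s when a
    pedestrian appears gap meters ahead. The vehicle coasts at v0 for
    prt seconds (the perception-response time), then brakes at decel
    m/s^2. Returns 0.0 if the vehicle stops before reaching the gap."""
    remaining = gap - v0 * prt          # distance left when braking begins
    if remaining <= 0:
        return v0                       # still reacting at point of impact
    v_sq = v0 * v0 - 2 * decel * remaining
    return math.sqrt(v_sq) if v_sq > 0 else 0.0

# Hypothetical scenario: 25 mph (11.2 m/s), child appears 12 m ahead,
# hard braking at 7 m/s^2.
av    = impact_speed(11.2, prt=0.5, decel=7.0, gap=12.0)  # quick-reacting AV
human = impact_speed(11.2, prt=1.5, decel=7.0, gap=12.0)  # typical human PRT
print(f"AV impact: {av:.1f} m/s, human impact: {human:.1f} m/s")
```

Holding speed and gap fixed, the shorter response time always produces the softer impact, which is precisely why this framing favors the robotaxi. Note what the model cannot ask: whether traveling at that speed through that gap was appropriate in the first place.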
More than Twitch Reaction Speed
But here’s the thing: there is more to avoiding crashes than reacting quickly. The context of the crash matters. The comparison should be to a careful and competent human driver who is adjusting their driving behavior according to the risks apparent in a situation, not a robot driver that obliviously drives at the posted speed limit and jams on the brakes only when an imminent collision is directly observable.1
For Zoox, a question to ask is whether a careful and competent human driver would have expected an elevated probability of a door opening and changed behavior (reduced speed, moved left within the travel lane, etc.). A human driver might do so if someone has just finished parking in a busy shopping district and is likely to soon get out of the car. Or even just if someone is in the car at all. (If that is what happened. We don’t know. Zoox isn’t releasing video so we’ll probably never know what really happened.)
Because there was an injury for Waymo, we know via a required report to NHTSA that the injury happened in what was likely a chaotic school dropoff scenario. A reasonable driver should be exercising extra care in such a situation. (Waymo should have been even more careful and arguably should have avoided school scenarios entirely given the heat they are taking for school bus violations.) A young kid appearing from behind one of several double-parked vehicles at school dropoff is not an exotic, unexpected event — it’s called a Friday at school dropoff. NHTSA will likely get to see the video, but we probably won’t in this case either.
The public-facing posture by both companies seems to be along the lines of the Waymo NIEON (Non-Impaired, with Eyes always on the cONflict) model, which is about whether a vehicle avoids a crash once it sees the conflicting object that presents a risk of collision. In the school child injury case, it measures reaction to a child “suddenly” appearing from behind a vehicle.2 But it does not take into account whether the computer should have been exercising an increased level of caution before the child became visible. That difference is a crucial gap between industry claims of any crash being “unavoidable” according to their narrowly constructed mathematical model vs. being unavoidable in real-world circumstances that warrant extra caution.
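A back-of-envelope comparison shows the size of the gap such conflict-triggered models leave open. Again, every number here is a hypothetical illustration of my own, not drawn from any company's model: the point is that contextual speed reduction can dominate reaction speed.

```python
import math

def impact_speed(v0, prt, decel, gap):
    """Impact speed (m/s): coast at v0 for prt seconds (perception-response
    time), then brake at decel m/s^2 over whatever remains of the gap.
    A result of 0.0 means a full stop short of the pedestrian."""
    remaining = gap - v0 * prt
    if remaining <= 0:
        return v0
    v_sq = v0 * v0 - 2 * decel * remaining
    return math.sqrt(v_sq) if v_sq > 0 else 0.0

# Same child, same 12 m gap, same 7 m/s^2 hard braking.
# A fast-reacting robot driver at the 25 mph posted limit (11.2 m/s):
robot = impact_speed(11.2, prt=0.5, decel=7.0, gap=12.0)
# A slower-reacting human who read the school-dropoff context and has
# already slowed to 15 mph (6.7 m/s):
cautious = impact_speed(6.7, prt=1.0, decel=7.0, gap=12.0)
print(f"robot: {robot:.1f} m/s, cautious human: {cautious:.1f} m/s")
```

Under these assumed numbers, the slower-reacting but contextually cautious driver stops entirely while the quick-reacting robot still hits the child. A model that starts its clock only when the conflict becomes visible never scores that difference.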
I’ve written about this topic in my discussion of embodied AI Negligence (section 7.4.2 of my book on Embodied AI Safety). There I specifically mention these conflict-based models, saying: “Such models are useful, but do not on their own address the full scope of avoiding negligent driving behavior related to a failure to properly react to high-risk context.”
Waymo Lessons Learned (?)
I have been saying for a while that claims of statistical safety are insufficient to garner public acceptance. The measuring rod of “better than a human driver” needs to be applied crash-by-crash to resonate with general public stakeholders.
NHTSA ODI is asking the right questions about the contextual factors of this particular crash according to their investigation document: “ODI has opened this Preliminary Evaluation to investigate whether the Waymo AV exercised appropriate caution given, among other things, its proximity to the elementary school during drop off hours, and the presence of young pedestrians and other potential vulnerable road users.”
The mere presence of school children in an obvious school dropoff scenario should have required Waymo — and any human driver — to exercise extra caution. Waymo’s non-apology for the ongoing issue of driving past stopped school bus pickups/dropoffs, resting on the claim of not having actually hit a school child (yet)3, is undermined by having actually hit a school child in this injury mishap.
I hope that robotaxi and other autonomous vehicle companies learn the right lessons here for the benefit of both road users and the long-term viability of the industry. Those lessons include, but go well beyond, avoiding Waymo’s tone-deaf pointing to a mathematical model-based brag that they didn’t hit the child as hard as someone else might have.4 And, while I should not have to say this, I’ll point out that messaging that the child (or perhaps the dropoff parent) is at fault for the child’s injury in this situation is unlikely to garner public sympathy.5
Waymo came so very close to a fatal crash with a highly vulnerable pedestrian a week ago. Despite what the Waymo co-CEO has said, their first clearly at-fault fatality represents an existential risk to that company, and perhaps the entire industry. I’m glad for the child, the child’s family, and Waymo that they all got lucky. But I’d feel better if there were visible signs that Waymo is taking to heart the lessons that need to be learned here. They need to pivot into a more proactive strategy of earning trust instead of deflecting blame.
Phil Koopman has been working on self-driving car safety for about 30 years, and embedded systems for even longer. For more on applying AI, see his new book: Embodied AI Safety.
1. We don’t know whether their vehicles actually adapted their behavior to the relevant situations, but it is reasonable to assume they are presenting the most favorable story they can tell, so there is no reason to give them credit beyond what they claim.

2. It is unclear how a pedestrian would plausibly appear from behind an SUV other than “suddenly.”

3. https://www.cbsnews.com/news/ntsb-investigation-waymo-robotaxis-passing-school-buses-austin-texas/ : Waymo quote: “There have been no collisions in the events in question… ” Their additional claim of statistical safety, that they “… are confident that our safety performance around school buses is superior to human drivers” is refuted by available data that suggests they are perhaps 10x worse than human drivers in those situations.

4. https://www.washingtonpost.com/technology/2026/01/29/waymo-autonomous-vehicle-crash/ : “Waymo said a human driver would probably have hit the child at a higher speed. That difference, the company said, is ‘a demonstration of the material safety benefit of the Waymo Driver.’”

5. The use of the word “suddenly” in crash reports seems intended to cast blame on the victim in both the Waymo and Zoox mishaps. Even if that blame were to be proven legally, casting blame seldom improves safety. And if the child had taken a Waymo robotaxi to school and had been exiting that robotaxi when hit by another robotaxi, who would Waymo blame then?


