I've disagreed with you before, Phil, but this one is off the charts. Companies like Waymo have had independent auditors report one liability incident per 2.3M miles, which is much better than the human record. Just what bar do you want to hold them to? Even at that level, you are going to get incidents. You are not going to get perfection. Ever. You're going to get incidents which look stupid in the press, incidents a human wouldn't have caused. That's because these are not humans; their error patterns will not match human error patterns. Unless you can prove wrong their premise that they can be safer (and it's not wrong, so you can't), you hold them to a standard that is either impossible or greatly delays the deployment of the future's large, scaled fleets with high safety performance, fleets which would greatly reduce risk on the roads. Because if you delay this, you don't simply remove the risk of badly behaving robots; you replace it with the well-known and much higher risk of the full range of human drivers, from drunks to professionals. Without question you cause a major, major increase of risk on our roads. That can't possibly be your goal, and yet it is what you seem to demand.
So what standard do you hold them to? It can't be perfection. It's unlikely it can even be "never make a mistake a human would not." (Even humans can't promise they won't make unusual mistakes that other humans don't make.) So what is it? And then do the math. Consider the delay you are asking for. Calculate the additional road risk in the future, when a safe fleet is deployed at scale. Calculate the risk you deem too much from tiny fleets today that might not meet your bar. Tell us the result of this calculation, any way you want to quantify the risk.
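To make the shape of that math concrete, here is a minimal back-of-envelope sketch. The human incident rate, fleet mileage, and delay length below are hypothetical placeholders chosen only to illustrate the arithmetic; the one-liability-incident-per-2.3M-miles figure is the one cited above.

```python
# Back-of-envelope comparison of road risk during a deployment delay.
# All parameters are illustrative assumptions, except the AV rate,
# which is the auditor-reported figure cited above.

AV_INCIDENTS_PER_MILE = 1 / 2_300_000     # one liability incident per 2.3M miles (cited)
HUMAN_INCIDENTS_PER_MILE = 1 / 500_000    # assumed human liability-incident rate

def expected_incidents(miles: float, rate: float) -> float:
    """Expected number of liability incidents over a given mileage."""
    return miles * rate

fleet_miles_per_year = 1_000_000_000      # assumed annual mileage of a scaled fleet
delay_years = 2                           # hypothetical deployment delay

miles = fleet_miles_per_year * delay_years
human = expected_incidents(miles, HUMAN_INCIDENTS_PER_MILE)
av = expected_incidents(miles, AV_INCIDENTS_PER_MILE)

print(f"Incidents if humans drive those miles: {human:,.0f}")
print(f"Incidents if the AV fleet drives them: {av:,.0f}")
print(f"Excess incidents attributable to the delay: {human - av:,.0f}")
```

With these placeholder numbers, the two-year delay costs roughly 3,100 additional incidents. Plug in whatever rates, fleet sizes, and delays you find credible, and report the result.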
It's not me that you or they have to convince; it is the public. Quoted headline: "AAA: Fear in Self-Driving Vehicles Persists: Drivers say they want better vehicle safety systems over self-driving cars"
https://newsroom.aaa.com/2025/02/aaa-fear-in-self-driving-vehicles-persists/
I'm offering a proposal to address that problem. What's your counter-proposal?
Ultimately there are aspects of safety that cannot be ignored. The standard for safety they will have to meet in the long term is documented here. Ignore it at the industry's peril. A single quantified number does not get the job done:
https://philkoopman.substack.com/p/keynote-talk-understanding-self-driving
No, it's not a single number, and they are not unaware. Two companies have had a serious incident; both companies are gone. But the sort of perfection you suggest is either not doable or takes an immense amount of time -- and far more than two companies are gone because it was taking too much time and money. What we want is a reasonable, attainable standard that reduces risk on our roads. Which is what the remaining companies (Waymo, Zoox, Nuro, Motional, May and the trucking firms) are trying to do. You are free to defend the public's irrationality here, but it does not advance the cause of road safety. As I am sure you know, the trust numbers in the AAA study are very different among those who have actually ridden in a Waymo. It's to be expected that people fear what they don't know.
Thank you.
Another perspective is that there is nothing an AV can do that a human can’t, but there are many hazards they create that are not present in human-driven vehicles. Every added or novel hazard associated with their machinery-based operations needs to be proven inconsequential before exposing the public to those AV-unique hazards.
The public needs to be protected from those AV hazards. The public is protected from the hazards at construction sites and airports by fences. Workers are protected from the machinery and process hazards in factories by machine design, regulations, interlocks and barriers. AVs operating in public are lethal hazards to motorists and pedestrians alike. Even if they are someday proven to conform to regulations and the unwritten rules of the road as well as humans do, the public still needs protection from the hazards unique to AV operation. It is unconscionable that they are allowed to endanger the public without the same kinds of protections that all other dangerous machinery requires.
Safety comes first. Since AVs don’t do anything that a human cannot do equally well, society will not suffer if developers are required to put adequate safety protections in place. How? That is another puzzle for them to solve. But public safety should not suffer because of an AV developer’s target return on investment. Those threats are a private matter.
Phil, I appreciate your work, but I find this baffling.
You write: "Promising “saving lives” is a really bad plan for robotaxi industry messaging. Unpacking this, the promise is that someday the industry thinks it will save lives, so we should put up with less safe behavior in the near-term. That includes ignoring photos and videos of misbehaving robotaxis. It also includes ignoring the risk imposed on road users who did not sign up to be passengers in vehicles that are still learning how to drive."
What less-safe behaviour are we currently putting up with that we should not? What photos and videos of misbehaviour are we being pressured to ignore? What incidents suggest that we are tolerating a level of risk that we should not?
You're clearly armed for battle, but I can't tell who your opponent is.
Andrew, I would recommend starting with the materials here:
https://philkoopman.substack.com/p/keynote-talk-understanding-self-driving
I love the idea of "student driver" on the plate! Seriously, companies and PR folks should know by now that covering up facts -- or not telling the whole truth -- can not only prolong the "negative media news cycle," but also erode confidence in the companies themselves. Reporters are not technologists. We judge the companies by their behaviors.
Hi Phil
Another great contribution to the conversation - thanks. However, correct me if I’m wrong, but you seem to start from the assumption that the people at risk are passengers in robotaxis. They are the ones who need usable information to allow them to decide if they want to ride in a particular robotaxi. But that ability to make a choice is not available to other road users - pedestrians, cyclists, other car drivers, school buses, emergency responders, etc. They have no way of controlling or mitigating their risk, other than to stay away from the areas where operation is allowed. How do you address the rights of these people to make informed decisions about their personal safety?
Douglas, thanks for the comment. First, let me level-set. I agree that other road users (especially vulnerable road users) outside the robotaxis are the ones we should be most worried about. They did not consent (whether properly informed or not) to be part of the experiment by summoning a robotaxi so they could take a ride. The only informed decision non-passengers can make is, if they feel unsafe, to go the other way when they see one, if that is possible. It's not always possible. I personally do not walk in front of a robotaxi at an intersection regardless of the signal colors. I wish FSD (or anything capable of automated stop/go operation) had mandated warning lights so I could avoid those vehicles as well, but they don't. Others might choose differently.
I took another look at the essay, and I don't see anywhere that I limit the discussion to in-vehicle safety. Incidents and loss events involve any crash or anyone getting hurt, not just passengers. Is there something that stood out to you? Certainly that limitation was not intended. Thanks for letting me know, even if it is just a vibe thing, so I can address it in later writing.
This is ultimately a playbook suggestion for the car companies. What others should be doing is out of scope for this essay, but is something I discuss in other places, such as my work on tort reform approaches with Prof. Widen.
Hi Phil
Thanks for the thoughtful response. I also re-read your essay and, you're right, it's not limited to in-vehicle safety.
In my defense, I was probably reacting to the fact that the messages coming from the robotaxi crowd are largely aimed at getting people to use the service, without which they have no business.
But, again in my defense, the two audiences (users and non-users) need to be treated differently. One is making an informed decision (although I think it's a stretch to say that, given the complexity of the technology involved) and presumably gets some benefit from the service. The other has not made any choice and may, in many situations, be running risks they are entirely unaware of. Even if you can see it's a robotaxi, do you understand the risks it presents? And then, as you mention, what about Tesla drivers and others relying on so-called FSD or Autopilot? We simply have no way of knowing at this point who or what is in control of any given vehicle.
So my conclusion is that the two audiences are measuring and exposing themselves to risk in entirely different ways and need to be considered separately.
Thanks again for your work and writing!
Doug
Doug -- great points! Thanks for sharing them.