A Plan for Trustworthy Robotaxi Messaging
The current "we're busy saving lives" strategy hurts more than it helps
Promising “saving lives” is a really bad plan for robotaxi industry messaging. Unpacked, the promise is that the industry believes it will someday save lives, so we should put up with less safe behavior in the near term. That includes ignoring photos and videos of misbehaving robotaxis. It also includes ignoring the risk imposed on road users who did not sign up to be passengers in vehicles that are still learning how to drive.
The industry would do better to earn trust by promising, and delivering on, responsible behavior from both the vehicle and the manufacturer.

Robotaxi technical and business plans must be aligned with achieving long-term acceptable risk, including not just average risk, but also absence of unreasonable risk, avoidance of risk transfer, absence of negligence, and all the other topics described elsewhere. But the way to communicate progress toward achieving that goal needs to be more grounded in demonstrable near-term results than a promise of someday – perhaps – saving lives.[1]
If “we’re saving lives” is the wrong messaging strategy to build trust, what is the right one? An alternative strategy must be both realistic and more centered on building justifiable trust over time.
Nobody can prove beyond doubt whether a particular robotaxi is safe while it starts long-term mileage accumulation on public roads.[2] But every adverse event that turns up in the news builds suggestive evidence that a robotaxi design is unsafe. There is a palpable risk that too much negative attention will accumulate before long-term outcomes can provide reasonable confidence that safety is acceptable.
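To give a sense of why statistical proof is so slow to arrive, consider the standard zero-failure bound (the “rule of three”: with zero events observed in N independent trials, the 95% upper confidence bound on the event rate is about 3/N) applied to fleet mileage. A minimal sketch in Python, where the human-benchmark fatality rate is an illustrative assumption:

```python
# Minimal sketch: the "rule of three" applied to safety claims. With zero
# fatalities observed over N miles, the ~95% upper confidence bound on the
# fatality rate is about 3/N. The benchmark below (roughly 1 fatality per
# 100M vehicle miles, in the ballpark of US human-driver statistics) is an
# illustrative assumption, not a measured figure.

BENCHMARK_RATE = 1 / 100_000_000  # fatalities per mile (assumed benchmark)

def upper_bound_rate(fatality_free_miles: float) -> float:
    """~95% upper confidence bound on the rate after zero observed events."""
    return 3.0 / fatality_free_miles

def miles_needed(benchmark_rate: float) -> float:
    """Fatality-free miles needed before the bound drops below the benchmark."""
    return 3.0 / benchmark_rate

print(f"{miles_needed(BENCHMARK_RATE):,.0f} fatality-free miles needed")
# -> 300,000,000 miles, and that is before any mid-stream software change
#    resets the statistical clock.
```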
Our idea to address this dilemma is to promise responsible behavior from both the car and the manufacturer while the technology is maturing. During the waiting period for long-term outcomes, both robotaxis and the companies that make them should be seen to be conspicuously good actors. That does not mean perfection, but rather transparent behaviors that show good-faith attempts at safe deployment as well as continuous improvement.
For robotaxis themselves, the goal should be no incidents that could reasonably have been avoided by an unimpaired, competent human driver. This includes not only loss events, but also driving behaviors that are dangerous, illegal, or would be considered stupid if a human driver had done them – even if no harm is actually done. This is likely to mean an extended duration of in-vehicle safety drivers, followed by an extended duration of remote telepresence advisors.[3] Each phase of human driver removal should take place only after the arrival rate of incidents requiring driver intervention is effectively zero. There should be high confidence that any residual incident arrivals will be handled without a loss event or major embarrassment before the next stage of human oversight is removed. Human drivers should be reinstated for each expansion of operational parameters (new geographic area, new environmental conditions, etc.).
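To make “effectively zero” concrete, here is one possible gating scheme (a hypothetical sketch, not a process any company is claimed to use): apply the same zero-event bound to interventions, step down the oversight ladder only when the bound clears a policy threshold, and climb back to the top whenever the operational design domain expands. The ladder levels, threshold, and class names are all illustrative.

```python
# Hypothetical sketch of the oversight ladder described above. Each step
# down removes a layer of human oversight; it is allowed only when the
# ~95% upper confidence bound on the intervention rate (rule of three:
# 3/miles with zero events) falls below an "effectively zero" threshold.
# Expanding the operational design domain (ODD) reinstates safety drivers.

OVERSIGHT_LADDER = ["in_vehicle_safety_driver", "remote_advisor", "none"]

class OddDeployment:
    def __init__(self, max_rate: float = 1e-6):  # interventions per mile; illustrative policy choice
        self.level = 0            # index into OVERSIGHT_LADDER, starting at the top
        self.clean_miles = 0.0    # miles accumulated since the last intervention
        self.max_rate = max_rate

    def log_miles(self, miles: float) -> None:
        self.clean_miles += miles

    def log_intervention(self) -> None:
        self.clean_miles = 0.0    # any intervention restarts the clock

    def try_step_down(self) -> bool:
        """Remove one layer of human oversight only if the statistics allow."""
        at_bottom = self.level + 1 >= len(OVERSIGHT_LADDER)
        bound = 3.0 / max(self.clean_miles, 1.0)   # ~95% upper bound, zero events
        if at_bottom or bound >= self.max_rate:
            return False
        self.level += 1
        self.clean_miles = 0.0    # confidence must be re-earned at the new level
        return True

    def expand_odd(self) -> None:
        """New geography or conditions: reinstate in-vehicle safety drivers."""
        self.level = 0
        self.clean_miles = 0.0

# Example: 4M intervention-free miles gives a bound of 7.5e-7 < 1e-6
d = OddDeployment()
d.log_miles(4_000_000)
print(d.try_step_down())  # True
```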
Zero reasonably avoidable incidents is likely a much higher bar than being no worse than a typical human driver. In part, that is because human drivers occasionally make high-consequence mistakes, and most mistakes of both human drivers and robotaxis have a low probability of resulting in a loss event. However, this level of aspirational performance is required to avoid negative news cycles and to make a credible claim to non-technical audiences that robotaxis will indeed be safer than human-driven vehicles.
Achieving this goal on public roads means that robotaxi safety drivers cannot be removed until there is high confidence that all incidents known at the time of release have been acceptably mitigated. In this case, “acceptable” might mean not only a low probability of a loss event occurring, but also a low probability of a dangerous or embarrassing vehicle behavior becoming an adverse social media or news event.[4] Repeats of the same type of incident will be even worse.
If the sales pitch is that robotaxis will be safer because they do not make the same dangerous mistakes that human drivers make, then robotaxis had better not be caught making comparably stupid mistakes often enough to matter.
Even though the aspirational goal is mistake-free driving, robotaxis will nonetheless make mistakes. In part that is because no technical system is perfect. But an even bigger part is that for years (likely decades) there will be a continual discovery process of newly revealed heavy-tail edge cases that were not accounted for in the current system design. That means, as a practical matter, we can only approximate the goal of incident-free operation.
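The pace of that discovery process can even be estimated. One standard tool (our illustration; no company is claimed to use it) is the Good-Turing estimate: the probability that the next incident is a previously unseen type is approximately the fraction of all observed incidents whose type has been seen exactly once. A sketch with made-up incident labels:

```python
# Sketch: Good-Turing estimate of remaining "surprise". The chance that
# the next incident is a brand-new type is approximately
# (number of incident types seen exactly once) / (total incidents).
# A heavy tail of one-off edge cases keeps this number stubbornly high.

from collections import Counter

def novelty_probability(incident_types: list[str]) -> float:
    counts = Counter(incident_types)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(incident_types)

# Made-up incident log: many one-off edge cases -> high residual novelty
log = ["cut_in", "cut_in", "wet_concrete", "downed_wire", "parade",
       "cut_in", "fog_bank", "hand_signal"]
print(f"P(next incident is a new type) ~ {novelty_probability(log):.2f}")
# -> 5 singleton types out of 8 incidents = 0.62
```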
It is tempting for the industry to deploy an opacity playbook when incidents do occur. Typical tactics vary among companies, but include hoping nobody notices incidents, blaming other road users, blaming poor municipal infrastructure, claiming that each incident should not count because it is “rare,” and citing whatever aggregate operational statistics might be favorable at the time. There can also be ad hominem attacks against anyone reporting an incident, including: public shaming of someone reporting an incident for undermining “the mission,” ridicule of someone reporting something “minor” for pearl-clutching, accusations of being a hater, accusations of promoting social media click-bait, accusations of stifling innovation (and therefore killing people due to slowing down improved safety), accusations of seeking profit from short selling of stock, accusations of having a personal vendetta against the company, and other accusations of bias against the manufacturer or its executives for <reasons>. These attacks can, at times, escalate to attempts at character assassination and attempts to provoke disciplinary actions from the target’s employer.[5]
While the opacity and deflection playbook can be effective in the early stages of deploying technology, eventually it becomes apparent what is really happening.[6] The result can be polarization of public opinion into two camps: true believers who are resistant to the influence of narratives counter to their beliefs (whether for or against the safety expectations of a particular manufacturer), and those who decide to ignore the continuous spates of claims and counter-claims and build beliefs based on what catches their attention – which might be pictures and videos of misbehaving robotaxis they see on news clips and social media. To be sure, there are those in the middle, but they can be few in number.
While building a robust messaging campaign that earns and maintains long-term public trust is a significant effort, we believe it can – and should – be done by including the following elements:
Tell people what to expect. When you say your vehicles are safe, what exactly do you mean by that? What are the limitations of the technology and its expected operational environment? What should people do, and not do, to help improve safety for everyone around the technology? If you tell people what to expect, they won’t be shocked when the technology shows the limits you already told them were present.
Disclose incidents instead of trying to hide them. This allows the company to control the narrative. However, it is important to give a clinically accurate disclosure instead of attempting PR spin. Otherwise, the later revelation of even a minor unreported incident becomes newsworthy, as does catching the company in a material misrepresentation of an incident.
Acknowledge that incidents take place, paired with tracking to show that the root cause of each incident has been resolved. For example, show video of a test demonstrating that especially risky incidents will be avoided as the result of a software update.
Acknowledge proportional responsibility for incidents. This might take the form of “the other road user was at fault, but we could have done better too,” again followed by video showing the improved behavior.
Publicly track progress toward resolving problematic behaviors. For example, track the frequency and duration of road blockages to show improvement over time if those have escalated to the point of attracting public attention (a minimal tracking sketch follows this list). Again, attempting PR spin will bite back in the long run.
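As a sketch of that last item (the field names, reporting window, and figures are all hypothetical), turning an incident log into a publishable blockage trend takes only a few lines:

```python
# Minimal sketch: turn a log of road-blockage incidents into a monthly
# trend (count, rate per 100k miles, average duration) suitable for
# public reporting. Field names and figures are hypothetical.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Blockage:
    month: str            # e.g. "2024-07"
    duration_min: float   # how long traffic was obstructed

def monthly_report(blockages: list[Blockage],
                   fleet_miles_by_month: dict[str, float]) -> None:
    by_month: dict[str, list[float]] = defaultdict(list)
    for b in blockages:
        by_month[b.month].append(b.duration_min)
    for month in sorted(fleet_miles_by_month):
        durations = by_month.get(month, [])
        rate = 100_000 * len(durations) / fleet_miles_by_month[month]
        avg = sum(durations) / len(durations) if durations else 0.0
        print(f"{month}: {len(durations)} blockages, "
              f"{rate:.2f} per 100k miles, avg {avg:.1f} min")

# Illustrative data showing improvement from one month to the next
monthly_report(
    [Blockage("2024-06", 12.0), Blockage("2024-06", 30.0), Blockage("2024-07", 8.0)],
    {"2024-06": 400_000.0, "2024-07": 550_000.0},
)
```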
The natural reaction of public relations staff is to avoid mentioning problems, often described as avoiding starting a fresh news cycle about a problem. A transparency approach, however, means intentionally starting a fresh news cycle as soon as possible to conspicuously show that a problem has been resolved.
The key idea behind this public relations approach is that companies already argue they should be given some slack for misadventures because they are busy teaching their computer drivers to be better, safer drivers. So they should be conspicuously transparent about progress in that area rather than be seen attempting cover-ups and stonewalling queries about what their vehicles are really doing on public roads.
This piece is a section of a book work-in-progress that has not yet been through final editing. Look for this upcoming book sometime in 2025.
[1] If a company is lucky enough for long enough, it might be able to get away with a strategy of opacity and “we’re saving lives.” But even if it is true that they will ultimately be proven right about that, it only takes one break of bad luck to have a public-facing event that causes an existential risk to the company. Just ask Cruise how that worked out for them. (Except you can’t, because they’re gone.)
[2] Arguably full conformance to industry safety standards would be a good proxy for a reasonable expectation of safety. A few companies say this is their approach. We shall see how well they are able to execute on that promise.
[3] We do not expect remote monitors to be able to jump in for highly dynamic “saves” of incidents in real time. However, remote humans can contribute to better handling of edge cases when the computer driver technology is good enough to convert surprises from highly dangerous situations into situations that require advice that can take a few tens of seconds to be returned from a remote operator. See this posting for the safety implications of remote operation.
[4] Why do we care about embarrassing behaviors that do not harm people, such as getting stuck in freshly poured concrete? Because for a long time there will be insufficient data to claim actual safety, so the system will be judged on the appearance of safety. Stupid driving mistakes undermine the credibility of claims that higher-stakes driving situations will be handled safely.
[5] Been there; done that, on the receiving end.
[6] In essence, the playbook amounts to a DARVO strategy (Deny, Attack, Reverse Victim/Offender). See: https://en.wikipedia.org/wiki/DARVO
I've disagreed with you before Phil, but this one is off the charts. Companies like Waymo have had independent auditors report one liability incident per 2.3M miles, which is much better than the human record. Just what bar do you want to hold them to? Even at that level, you are going to get incidents. You are not going to get perfection. Ever. You're going to get incidents which look stupid in the press, incidents which a human wouldn't have done. That's because these are not humans; their error patterns will not match humans'. Unless you can prove their premise of being able to be safer wrong (and it's not wrong, so you can't prove that), you hold them to a standard that is either impossible or greatly delays the deployment of the future's large, scaled fleets with high safety performance which do greatly reduce risk on the roads. Because if you delay this, you don't simply remove the risk of badly behaving robots, you replace it with the well-known and much higher risk of the full range of humans, from drunks to professionals. Without question you cause a major, major increase of risk on our roads. That can't possibly be your goal, and yet it is what you seem to demand.
So what standard do you hold them to? It can't be perfection. It's unlikely it can even be "never make a mistake a human would not." (Even humans can't promise they won't make unusual mistakes that other humans don't make.) So what is it? And then do the math. Consider the delay you are asking for. Calculate the additional road risk in the future, when a safe fleet is deployed at scale. Calculate the risk you deem too much from tiny fleets today that might not meet your bar. Tell us the result of this calculation, any way you want to quantify the risk.
Thank you.
Another perspective is that there is nothing an AV can do that a human can’t, but there are many hazards they create that are not present in human-driven vehicles. Every added or novel hazard associated with their machinery-based operations needs to be proven inconsequential before exposing the public to those AV-unique hazards.
The public needs to be protected from those AV hazards. The public is protected from the hazards at construction sites and airports by fences. Workers are protected from the machinery and process hazards in factories by machine design, regulations, interlocks and barriers. AVs operating in public are lethal hazards to motorists and pedestrians alike. Even if they are someday proven equally conforming to regulations and the unwritten rules of the road in place for humans, the public still needs protection from the hazards unique to AV operation. It is unconscionable that they are allowed to endanger the public without the same kinds of protections that all other dangerous machinery requires.
Safety comes first. Since AVs don’t do anything that a human cannot do equally well, society will not suffer if developers are required to put adequate safety protections in place. How? Another puzzle for them. Public safety should not suffer because of an AV developer’s target return on investment. Those threats are private matters.