Lots of ink has been spilled about US Autonomous Vehicle (AV) regulation, arguing over which is the “right” way to regulate safety, or counter-arguing that any regulation whatsoever would somehow “stifle innovation.” But there is no single right way. And the US has frittered away years debating the false dilemma of picking the One True Regulatory Approach while other countries have been busy making tangible progress in this area. We are getting left behind.
Now that we are seeing robotaxis scaling up on public roads, it is time to get serious about this topic. A missing or dysfunctional regulatory approach will hurt the US more than it will help. At some point it is important to set ground rules rather than cling to the current Wild West approach we have adopted in the name of avoiding “stifling innovation.”
A four-layer process based on existing regulatory and legal mechanisms will let us implement increasingly effective safety guardrails as the technology matures, without stifling innovation. Each layer provides more robust guardrails to buy time for the next layer to mature. Think of it as graduated sets of training wheels for safety oversight.
The layers are: (1) regulatory recalls, (2) tort liability reform, (3) regulatory rulemaking, and (4) product liability reform. One might come up with additional layers, as well as other approaches that complement these layers. But this discussion is limited to mapping these four existing regulatory and liability mechanisms onto AV safety.
Layer 1: Regulatory recalls based on statistically problematic behaviors
We already have regulatory oversight of AVs in the form of recalls. After a particularly bad crash or a series of less severe incidents, NHTSA contacts the manufacturer. After a negotiation process that sometimes involves a formal investigation, the outcome can be a recall that is issued to fix that particular problem.
The good news is that this is already happening. The recall process lets NHTSA oversee the correction of proven driving behavior issues by monitoring that the company has pushed out a software update to fix them.
The bad news comes in four parts:
It only happens after something goes wrong, and often not until after one or more people have been hurt or killed.
There is no direct financial consequence for an OTA software update remedy. While there can be some indirect adverse consequences from a recall, such as fleeting bad publicity, the threat of a potential recall places comparatively little pressure on manufacturers to avoid problems before they put cars on the road.
NHTSA does not vet the fix. A company can simply claim things are fixed while putting a band-aid on a much deeper issue. NHTSA then has to go back with another investigation. The process can take years.
Recalls are based on specific, correctable behavioral issues or component failures. Net risk and other aspects of safety are not typically among the criteria considered in determining whether a recall is needed.
Regulatory recalls are needed, but they are an incomplete approach to providing safety. This is especially true in the US, where regulators have not established pre-release safety tests relevant to AVs. But even where type approval is in use, as it is in Europe, regulatory recalls do not provide a framework for compensating victims.
Layer 2: Tort Liability based on a reasonable human driver standard
Consider a hypothetical robotaxi crash in which the computer driver runs through a red light, killing a pedestrian in a crosswalk. This happens exactly one time. There is no discernible reason for it to have happened, and no previous pattern of running red lights. The manufacturer tells NHTSA there was nothing in the vehicle that can be corrected to fix the problem, but rather that this was a highly unusual statistical fluke inherent in the use of machine learning technology. Nothing is perfect, and it just happened. A recall is highly unlikely to be issued to fix something that can’t be fixed. And even if a recall were issued, it would not compensate the victim’s family for their loss.
Tort liability for human driving mishaps exists to (a) compensate victims of negligent driving behavior, (b) put financial pressure on drivers to avoid negligent driving behavior, and (c) provide a framework of rules for insurance compensation. We can provide additional safety guardrails for AVs by applying the same framework to AVs.
Currently, AV losses tend to be handled under product liability (see the next section). However, that approach is ill-suited to compensating victims of garden-variety mishaps. Consider the above scenario involving running a red light. Should experts be hired and millions of dollars be spent to find a software defect responsible for running the red light before an injury victim can be compensated for negligent driving behavior? (If you think “yes,” then do you also support requiring a functional MRI, psychotherapy, and genetic analysis to determine why a human driver ran a red light before holding them accountable if they were not drunk or otherwise impaired?)
And if experts can’t find a particular software defect that provably caused the mishap, does that mean we’re OK as a society with robotaxis killing pedestrians via running red lights with no consequence? We wouldn’t give a human driver a free pass for this behavior, and we shouldn’t do that for a computer driver. (This is tort law — we aren’t talking about jail time. We’re talking about whether the driver’s insurance should pay out compensation for the death, and whether the victim’s family has a right to sue for compensation for negligent driver behavior if they find the insurance payout inadequate.)
As with human drivers, an exemplary driving record does not change the fact that a specific driving misbehavior caused harm to someone else. Statistically saving lives should not give robotaxis license to violate traffic laws or otherwise drive recklessly with impunity.
Instead, we should establish a duty of care for computer drivers equal to that of human drivers. If a jury would have found a human driver negligent for behaving a particular way in a particular situation, then a computer driver should similarly be found negligent. Beyond that, since computers are not legal persons, the manufacturer should be held responsible for any liability. (The manufacturer is the entity in a position to have avoided the problematic behavior, not the vehicle owner or fleet operator. For complicated cases, the manufacturer should pay out and resolve any contributions to negligence from other parties without dragging the victim into that mess.)
Achieving this outcome requires establishing a duty of care for a computer driver equivalent to the duty imposed on a human driver in comparable circumstances, with the manufacturer acting as the responsible party for breaches of that duty of care. This sets a safety expectation on a crash-by-crash basis. If a reasonable human driver would have avoided a crash, the computer driver should have avoided it as well.
This is not a requirement for AVs to be perfect. There are plenty of crashes in which there is no tort liability. Rather, this approach is simply holding computer drivers to the same negligence standards we apply to human drivers, using the same laws, same rules, same jury trial process, same insurance processes, and so on.
This layer should handle the vast majority of loss events with an efficient, well-established process. Promoting every AV crash to a product liability lawsuit makes no sense from either an efficiency point of view or a justice point of view.
For those who argue that insurance alone, without the possibility of tort liability court proceedings, will provide a safety guardrail, keep in mind that auto insurance payouts are based on underlying tort liability rules. The idea of requiring insurance is to promote efficient settlements under tort law rules while ensuring some minimum amount of compensation is available to injured parties. Any time a mishap’s losses exceed the insurance amount, the fallback is the tort liability system. So think of insurance as an efficient shortcut to handle the “easy” cases, and not a replacement for tort liability.
Some might propose universal no-fault insurance to address the liability issue by, in essence, saying there is no liability for an AV crash. There is a pool of money that pays out for losses and is paid into by insurance premiums, and that’s it. There are two issues with this. The first is that in practice it is likely to under-pay for loss events, transferring a residual burden onto victims. (The likely mechanisms for this outcome amount to the reasons why mandatory arbitration tends to be bad for consumers.) The second issue is that pressure for safety on manufacturers is greatly reduced. Insurance can provide a bit of pressure to help with safety, but only when backed by a robust tort law system. Insurance costs alone are simply too weak an incentive to produce acceptable safety. An additional consideration, from the point of view of this layered framework, is that no-fault insurance fails to couple financial consequences to the specifics of an individual mishap, instead taking a statistical approach.
It is important to note that tort liability is concerned with the specifics of a particular crash and not with net statistical safety outcomes. It is possible for an AV design to have ten times the crash rate of a human driver and yet not be liable for any of the crashes. So this is another partial safety guardrail rather than a definitive solution. It is, nonetheless, a crucial layer to provide an avenue for victim compensation without requiring the expense and complexity of a product liability regime.
Layer 3: Regulatory rulemaking based on lessons learned
Over time, lessons will be learned about equipment design practices that are effective in improving AV safety. These might run the gamut: visual indicators to express motion intent to other road users, requirements for behaviors that amount to a sort of driving test, requirements for emergency response behaviors, mandatory equipment malfunction responses, and so on. We can expect to see these gradually incorporated into the US Federal Motor Vehicle Safety Standards (FMVSS), the New Car Assessment Program (NCAP), and their European counterparts. In the long term, these will help capture lessons learned. But they are a bare minimum to promote specific aspects of safety, and are not intended to fully ensure safety on their own. And this process will easily take a decade or three to play out.
We might eventually see required conformance to industry safety standards such as ISO 26262, ISO 21448, ANSI/UL 4600, ISO/TS 5083, ISO/PAS 8800, and so on. But don’t hold your breath, because the industry has spent decades fighting against process-based requirements to supplement vehicle-level testing.
We should never lose sight of the fact that these rulemakings will eventually happen. However, Layers 1 & 2 serve to put safety guardrails into place while the decades-long regulatory rulemaking process grinds along.
Even when regulatory rules are in place, they will not ensure statistical safety. Rather, they will look more like a proactive recall-in-advance strategy. In essence, they tell manufacturers that specific tests must be passed, because a failure is an automatic recall trigger regardless of statistical road outcomes.
Layer 4: Product Liability based on statistical comparability to human drivers
There will be times when a tort liability approach is insufficient to put pressure on manufacturers to achieve acceptable safety. Moreover, recall and test-centric regulatory approaches deal with specific behaviors and safety mechanisms rather than net outcomes. How will we assure that AVs are at least as safe as human drivers in the long run?
One plan is to hope that a combination of regulatory pressure to avoid recall-provoking “unreasonable risk” plus the threat of tort law payouts will be enough. But this is unlikely to be the case. There is simply too much money to be made, and it is too cost-effective to settle lawsuits and play the regulatory recall game. Layers 1-3 will not be fully sufficient incentives to achieve a degree of safety that other stakeholders are likely to find acceptable. Business incentives are not aligned with societally acceptable safety, but rather with the cost of losses being tolerable given the profits being made from operating robotaxis and robotrucks. Being able to afford paying out for harm and buying insurance does not make any particular activity objectively safe, and this is true for AV safety as well.
We could simply follow the current plan of believing the “Trust us bro, we’re saving lives!” propaganda from the AV industry. And maybe it will work out in the end. But maybe it won’t. And even if it works out for some companies, it probably won’t work out for other companies.
We need a Plan B to put pressure on companies to end up in the right place, which includes at the very least actually being safer than a human driver. (Contrary to often-reported claims, we still have no idea if robotaxis will be safer than human drivers for fatalities. Injury data being published seems promising, but we are still a billion miles short of knowing how things will turn out. We have no way of knowing whether the safety record will change sometime in the future under profitability pressure or other business realities. And there is no reason to believe that one company’s safety record will predict safety outcomes for another company.)
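For readers who want the arithmetic behind the “billion miles” remark, here is a minimal back-of-the-envelope sketch. It assumes a simple Poisson model and a human baseline of roughly 1.3 fatalities per 100 million miles; both the model and the number are illustrative assumptions, not fleet data.

```python
# Rough sketch of the exposure needed to support a fatality-rate claim. The
# human baseline is an assumed round number, not a measured fleet statistic.
import math

HUMAN_FATALITY_RATE = 1.3e-8  # assumed: ~1.3 fatalities per 100 million miles

def fatality_free_miles_needed(confidence=0.95, target=HUMAN_FATALITY_RATE):
    """Miles of fatality-free driving needed to claim, at the given confidence,
    that the true fatality rate is no worse than the target (Poisson model)."""
    return -math.log(1.0 - confidence) / target

print(f"{fatality_free_miles_needed():,.0f} miles")  # ~230 million miles
```

And that is the best case: roughly 230 million fatality-free miles merely to match the human baseline. Demonstrating a modest improvement once fatalities inevitably occur pushes the required exposure into the billions of miles.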
Plan B is product liability.
Traditional product liability should still be available as a back-stop to tort liability and regulatory mechanisms. However, it will be difficult to apply in practice. Analyzing complex system designs is insanely expensive. Just trying to find a smoking gun in a complex conventional software system is a Herculean task. For machine learning-based systems it might be nearly impossible. We should leave this avenue in place, but we need something more.
Traditional product liability should be revisited to address the biggest potential issue for AVs. What if they drive competently, but suffer higher fatality rates than human drivers for no specific, fixable reason? What if they just have poor judgment in unusual circumstances?
The machine learning-based technology used to build AVs is famously lacking in what might be called “common sense.” More specifically, it tends to fail spectacularly when presented with rare, but high-consequence situations it has not been trained on. What if those end up resulting in a higher fatality rate than human drivers? And what if those higher fatality rates disproportionately harm road users who do not even use the technology, such as pedestrians?
We propose that product liability should gain an additional ground for complaint: violating societal stakeholder expectations for safety. The expected source of evidence would not be engineering analysis of technical designs, but rather statistical analysis of harmful outcomes. Think of this as the AV version of looking for harmful effects of pharmaceuticals, or harm done by exposure to carcinogens. Mechanisms for harm might be informative, but a statistically compelling correlation should be enough to prove the point. It might be the case that some particular AV design is operating as expected, and no specific way to make it better is known, yet it imposes too much net harm on some group of people. (Additional thought will be needed on how to bring class action mechanisms to bear on a robotaxi that might have weekly software updates, but that is another discussion.)
This would be an opportunity to have a check and balance that addresses the statistical aspects of safety. An AV might have product liability exposure for any of the following characteristics (a rough sketch of what such a statistical test might look like follows this list):
Significantly higher fatality or severe harm rates than comparable human drivers
Significantly higher fatality or severe harm rates than other computer drivers, perhaps attributable to a design choice (e.g., deciding to avoid use of lidar to cut costs, or an over-reliance on HD maps that turn out to have too-high defect rates)
Disproportionate harm to identifiable population segments, especially vulnerable populations such as pedestrians or children
Other statistical measures of harm that are problematic, but not readily addressable via regulatory or tort law mechanisms.
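To make “statistically compelling” concrete, here is a minimal sketch of one plausible form such evidence could take: a one-sided Poisson comparison of a fleet’s severe-harm count against the count expected from a human-driver baseline over the same exposure. All of the numbers and the choice of test are illustrative assumptions, not real fleet data or an established legal standard.

```python
# Hypothetical sketch of a Layer 4 statistical-harm comparison. All inputs
# (event counts, mileage, baseline rate) are invented for illustration only.
from scipy.stats import poisson

def excess_harm_p_value(av_events, av_miles, baseline_rate_per_mile):
    """Probability of seeing at least av_events severe-harm events in av_miles
    if the fleet were only as harmful as the human baseline (one-sided test)."""
    expected_events = baseline_rate_per_mile * av_miles
    return poisson.sf(av_events - 1, expected_events)  # P(X >= av_events)

# Example: 12 severe injuries in 50 million miles, against an assumed human
# baseline of 1 severe injury per 10 million miles (5 expected events).
p = excess_harm_p_value(av_events=12, av_miles=50e6, baseline_rate_per_mile=1e-7)
print(f"p = {p:.4f}")  # ~0.005: statistically compelling excess harm
```

A real proceeding would also have to argue over exposure measurement, baseline matching (road types, weather, time of day), and multiple comparisons, but the point stands: the evidence is about outcome rates, not about finding a defect in the code.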
Longer term, as the industry matures, product liability might also include failure to keep up with the state of the art of safety. For example, if a particular brand of AV has crashes in a hazardous driving situation that other major brands of AVs consistently avoid, that might be deemed a product defect, even if the AV was not “at fault” from a driving negligence point of view. This ends up being an adaptation of the current product liability consideration of whether a cost-effective alternative design is available that mitigates a known hazard, but was not used in a particular product design.
Expanding product liability law to address statistical harms by robotaxis might be a heavy lift. But it seems to be the only existing mechanism aligned with the need for accountability for fleet-level safety outcomes.
Layered guardrails and trust
The regulatory layers are already in progress. However, rulemaking will take decades to play out.
The tort litigation layer is our best hope for a comparatively near-term way to put reasonable safety guardrails in place for an industry that regularly outstrips the ability of regulators to keep up. We need something sooner rather than later to provide safety guardrails while regulators progress toward rulemaking. Tort law reform along the principle of assigning a duty of care to computer drivers is the most obvious way to do this.
Long term, we expect that product liability will have to evolve. This will not be quick, and it will not be simple. There are no doubt competing ideas that might also work out. Consider Layer 4 of this essay a speculative proposal. But we need to start the discussion now, so that the topic can mature as AV fleet sizes grow.
Assuming the AV industry prospers, it will be forced down this layered path, or something similar, whether it likes it or not. For now the AV industry fights this path every step of the way. But we feel that embracing and guiding the path would be a more socially responsible way to go, and would ultimately help long-term adoption of the technology as well.
This post is a draft preview of a section of my new book that will be published in 2025.
Thank you, Phil.
I think that your layered construct provides needed clarity for establishing AV safety. It may need support from a complementary regulatory Layer 0 comprising down-and-dirty minimum design requirements.
Layer 0 should be based on current knowledge and heritage from other technologies, and may be needed to provide the foundation for the other Layers. For example, the performance of any AV should be inspectable (by government officials, fleet operators, or individual owners), both for conformance of mechanical/visibility/lighting/annunciator status, as with current vehicles, and for conformance to design intent for logical safety- or life-critical functionality, e.g., inspectable records of whether operational deviation from planned trajectories conforms to the control system’s design requirements. The current practice of data hide-and-seek cannot be allowed and should be excluded at Layer 0; that would support Layers 1-4. This transparency would be needed to underlie your Layer 1, particularly where safe AV operation might be on a knife’s edge that can be detected, by auditing actual performance against design limits, before someone is killed.
Another Layer 0 parameter might be confirmation that AV design validation encompasses known instances of AV failures that resulted in injury or death. (Clearly, Waymo did not learn from Cruise.)
Another Layer 0 example could be redundancy standards for safety- and life-critical logic. For example: all AV occupant safety-critical logic-driven functionality might be single fault tolerant within the ODD, all life-critical logic-driven functionality dual fault tolerant within the ODD, and safety-critical functionality with regard to vulnerable road users triple fault tolerant.
As an aside, statistical arguments for AV safety should include estimated parameter uncertainty compared with the uncertainty of the comparable human-driver statistics, with no credit given for benevolent intent.
On point, technically astute, and sadly impractical. Implementation of even a subset of these suggestions would require a government that cares about the welfare of people who live here, and that is definitely not the case right now. (Recommended search terms: measles, severe weather warning, Boeing, Chevron deference.)