What should expectations be for autonomous vehicle road behavior?
Should an AV adhere to road rules, have expert driving behavior, or simply be "reasonable"?
What should be the rubric for deciding that a robotaxi or robotruck has safe enough driving behavior? We might consider adherence to road rules, no avoidable crashes, or being “reasonable.” The first two options probably won’t work in practice. And even the third is only a part of ensuring acceptable driving safety.
Adherence to road rules
A naïve approach to asserting that an autonomous vehicle (AV) is driving safely is to show that it adheres to road rules. While an AV that pervasively ignores road rules is certainly undesirable, we would quickly get very frustrated with one that adheres 100% to every single rule.
Some of this is because human drivers get sloppy, for example not coming to a full and complete stop at stop signs. The degree to which that happens varies by regional culture, as does the degree of law enforcement response to such things. However, saying "humans flout the law so robots should too" is a poor excuse, even if it really annoys surrounding human drivers when an AV obeys the speed limit or stops at stop signs. There are arguments, tricky to work through, that overly careful driving provokes surrounding human drivers into doing dangerous things. My take is that how much flex there should be in the rules is a question of public policy, and that boils down to the next point.
The real issue is that our road rules are designed to work with human drivers who make on-the-spot decisions to justifiably break road rules (or remain reasonably safe while flexing them in minor ways without compelling justification). Our enforcement system accounts for justifiable rule breaking and allows for law enforcement discretion. If a boulder is blocking your side of a two-lane country road in a no-passing zone after a big storm, do you wait in your lane for hours for emergency responders to arrive and set up traffic direction? Probably you just go around when the road is clear. And probably that is OK, so long as you are also OK with taking personal responsibility for any mishap caused by doing that in the presence of oncoming traffic.
An essential part of our current system is accountability for breaking road rules via tort and criminal law. You can break them if you think doing so is justified — but you are held accountable for any harm done. And you can be held accountable for undue risk imposed on other road users by reckless behavior. (Yes, revenue stream via speed trap is a thing; I’m talking about how our system is supposed to work, not dysfunctions that should also be addressed.) This same dynamic needs to apply to the manufacturers of AVs for any flexibility we give them to work in practice.
There are other situations in which road rules and signage might be conflicting, ambiguous, or just inapplicable. With human drivers we can say “do the right thing” and mostly that will happen. For computer drivers “always obey road rules” is likely to lack the flexibility needed.
To be sure, AVs should be programmed to obey road rules almost all the time. We might undertake the very heavy lift of revising road rules to be much more detailed and to explicitly provide the necessary flexibility. Clarifying road rules will certainly help by cleaning out the low-hanging fruit of impracticality, ambiguity, and conflicts. But I'll bet we can never get rid of all the conflicts, exceptions, and need for discretionary judgment in interpreting them. So there is a place for discretionary judgment, and also a place for accountability when that discretionary judgment is abused.
No avoidable crashes
We might instead take a position that acceptable driving behavior means no avoidable crashes. This has two issues: addressing harm beyond crashes, and pinning down what we mean by “avoidable.”
Saying that an AV is unambiguously safe because it has not actually hit another road user takes too narrow a view. Other negative externalities also matter, such as blocking fire trucks on their way to an emergency. So avoiding crashes is great, but it is only part of safety.
Pinning down “avoidable” is much more difficult. We might say that Newtonian physics is our guide, and call a crash unavoidable when physics says it could not have been prevented. A physics-based approach has to account not just for braking and steering capability, but also for time spent sensing, building situational awareness, and moving the physical parts of the car, such as closing brake calipers to start braking. Don’t forget the force of friction, hills, and all the rest. Sure, that’s a start, but as a practical matter other crashes are probably unavoidable as well.
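To make the Newtonian view concrete, here is a minimal stopping-distance sketch. The latency and friction values are illustrative assumptions, not real AV specifications; real vehicles and real safety cases are far more complicated.

```python
# Illustrative stopping-distance estimate for a physics-based notion of
# "avoidable." All parameter values are assumptions for illustration.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_mps: float,
                        latency_s: float = 0.5,
                        friction_mu: float = 0.7) -> float:
    """Distance covered during sensing/actuation latency plus braking.

    latency_s lumps together sensing, building situational awareness,
    and mechanical delay such as closing the brake calipers;
    friction_mu is an assumed tire-road friction coefficient.
    """
    reaction = speed_mps * latency_s                   # travel during latency
    braking = speed_mps ** 2 / (2 * friction_mu * G)   # v^2 / (2 * mu * g)
    return reaction + braking

# At 15 m/s (~34 mph), any obstacle first detectable closer than this
# distance makes the crash "unavoidable" in the narrow Newtonian sense.
print(round(stopping_distance_m(15.0), 1))
```

Even this simple model shows how much of "avoidable" hinges on assumed latency and friction, before we ever get to judgment calls.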
There is a temptation to build an engineering or mathematical model for “avoidable,” with several attempts from different sources (e.g., RSS, force fields, NIEON). While these models provide insight and partial support for assuring safety, in the end any crash that would have been avoided by a human driver is going to be a problem for an AV, even if the argument is “but it is better than humans on average and it meets our safety model criteria.” Arguing an AV met its engineering requirements is cold comfort to the victim’s family in a crash that could have been avoided with a broader but still reasonable view of “avoidable.”
The trickiest part of “avoidable” is the time frame. Let’s say you accelerate towards a pedestrian in a crosswalk because at her current rate of travel she will be out of the way by the time you get there. She trips and falls backward in front of your vehicle just as she moves out of your lane, and you run her over. That is narrowly unavoidable because you could not brake in time. But if you had not been accelerating toward a pedestrian in the crosswalk, you would have had time to brake. So it was avoidable with slightly better driving behavior (which, in California, is actually required by road rules).
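A toy simulation shows how the same pedestrian fall is avoidable at constant speed but not while accelerating. All the numbers (speeds, distances, latency, friction) are illustrative assumptions I chose for this sketch.

```python
# Toy simulation of the crosswalk example: the same fall event is
# "avoidable" when holding speed but not when accelerating toward it.
# All parameter values are illustrative assumptions.

G, MU, LATENCY = 9.81, 0.7, 0.5  # gravity m/s^2, friction, latency s

def min_stop_distance(v):
    # Travel during latency plus braking distance v^2 / (2 * mu * g).
    return v * LATENCY + v * v / (2 * MU * G)

def can_stop_before(distance_to_crosswalk, v0, accel, fall_time):
    """Advance the car for fall_time seconds under policy `accel`,
    then check whether a full stop short of the crosswalk is possible."""
    dt, t, x, v = 0.01, 0.0, 0.0, v0
    while t < fall_time:
        x += v * dt
        v += accel * dt
        t += dt
    remaining = distance_to_crosswalk - x
    return remaining >= min_stop_distance(v)

# Pedestrian 35 m ahead falls 1.5 s after first becoming visible.
print(can_stop_before(35.0, 10.0, accel=0.0, fall_time=1.5))  # prints True
print(can_stop_before(35.0, 10.0, accel=3.0, fall_time=1.5))  # prints False
```

The physics is identical in both runs; only the earlier driving decision differs, which is exactly the point about how far back the causal chain "avoidable" reaches.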
On the other hand, if your car never leaves the garage, you’ll never hit a pedestrian in a crosswalk. That’s going too far. So how far back up the causal chain do you swim to define “avoidable”?
Being reasonable
Adhering to road rules and not suffering avoidable crashes both have a lot of good in them, but neither works on its own. Rather, what we really need is “reasonable adherence” to road rules, and not suffering “reasonably avoidable” crashes.
This, however, punts the problem to what we mean by “reasonable.” Fortunately, there is a solution to that in two parts.
Part 1: An AV should be considered to have reasonable driving behavior if its conformance to road rules, avoidance of crashes, and other negative outcomes are no worse than a reasonable human driver in comparable circumstances.
This is a lower bar than the AV industry is telling us is achievable by robotaxis that do not drink and do not text while driving and therefore have super-human capability (i.e., they claim they are already “saving lives”). But we’re seeing crashes due to software defects that end up with news stories indistinguishable from crashes by drunk and texting human drivers. So I’m willing to set the bar as low as human drivers and not demand more, at least for now, in terms of the driving behavior part of safety. Give it a few hundred million more miles and see where we end up. Note that as a practical matter this is not saying AVs need to behave identically to people, but rather that breaking road rules and crashes should have outcomes no worse than human drivers on a case-by-case basis.
Part 2: But how do we know what “reasonable” means? This question drives computer scientists crazy, because there is no formal specification for “reasonable.” It is not even possible as far as I know to fully describe “reasonable” as a set of informal requirements in classical software engineering terms. The edge cases overwhelm your ability to list everything.
“Reasonable” is a concept defined in tort law, and that is the definition we should use, applied in much the same way. See: https://en.wikipedia.org/wiki/Reasonable_person
If a loss event occurs that involves an AV, we should treat the computer driver as if it were a human driver, and decide to blame (or not to blame) based on the same criteria of reasonable behavior we would have applied if it were a human driver in that same situation, using close analogies as required. We have a mature jury trial system available for this purpose, full of jurors who are domain experts on what they consider to be reasonable driving behavior (licensed drivers are a frequently-used pool from which to draw jurors). Simple as that.
While the lack of a formal specification seems a problem for implementation, there is a wonderfully ironic solution. Instead of complaining about the lack of a specification, machine learning folks should mine case law as a big data problem to model what “reasonable” means. We have decades of case law regarding human drivers to draw upon. Just make your computer driver avoid the behaviors that were deemed unreasonable by the court system. And extrapolate as best you can for novel edge cases. If you can’t solve that, you probably shouldn’t be putting driverless cars on public roads either.
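As a sketch of what "mining case law" might look like at its very simplest, here is a toy nearest-precedent classifier. The mini-corpus of precedent summaries, the bag-of-words similarity, and the labels are all hypothetical stand-ins for real case law and real NLP; a serious effort would need far richer models and data.

```python
# Toy sketch of "mining case law" as a classification problem: label a
# novel driving behavior by the outcome of its most similar precedent.
# The precedent texts and outcomes below are hypothetical examples.

from collections import Counter
import math

PRECEDENTS = [
    ("driver accelerated toward occupied crosswalk", "unreasonable"),
    ("driver crossed yellow line to pass boulder blocking lane", "reasonable"),
    ("driver rolled through stop sign in dense traffic", "unreasonable"),
    ("driver exceeded speed limit slightly to merge safely", "reasonable"),
]

def bag(text):
    # Bag-of-words representation; Counter returns 0 for missing words.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def judge(behavior):
    """Return the court outcome of the most similar precedent."""
    scores = [(cosine(bag(behavior), bag(text)), outcome)
              for text, outcome in PRECEDENTS]
    return max(scores)[1]

print(judge("driver accelerated toward pedestrian in crosswalk"))
```

The point is not that nearest-neighbor on keywords solves the problem; it is that decades of labeled court outcomes are exactly the kind of training signal machine learning people say they want.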
Beyond just driving behavior
To be clear, there is more to safety than good roadmanship. But driving behavior is the most visible aspect of safety to most road users. So we will need to have a rubric for judging that as one portion of a much more complex question of “how safe is safe enough.” That rubric should be based on the requirements for reasonable driving we also place on human drivers.
A really practical proposal, thank you.
Professor Koopman, a good piece. I read it as arguing that what we need is a commonsensical evaluation of AV driving, something I agree with. In my view there are lots of situations in which an AV drives 'safely' strictly speaking, in that it does not hit anything, but still does not drive well. I have argued that having driving instructors evaluate the driving of self-driving cars would be a better option than what we are currently doing. The fact that AVs don't have judgment seems like a deal breaker to me.... I have argued this here: https://www.youtube.com/watch?v=cFhpraUkv_8