The Ironies of Automation for Self-Driving Vehicles
Snoozin’ while Cruisin’: a preview section from a new book in progress
Beyond a certain point, improving the driving capability of a computer can do more harm than good if it must still be supervised by a human driver to ensure safety. This seems counterintuitive, but it is a fundamental fact of human nature.1 The situation is due to what have become known as the Ironies of Automation,2 which apply to all automated systems, including robotaxis and any embedded AI system that relies on people for safety oversight.
For our purposes, there are three core issues involved with the Ironies of Automation. The first is that human operators lose proficiency at a control task (or never get the chance to develop true proficiency in the first place) when that task is automated, yet are still expected to intervene in difficult situations when the automation fails. The second is that human operators have difficulty maintaining situational awareness when observing rather than controlling a system, which makes it much harder to intervene successfully when something does go wrong. The third is that a control task is often automated precisely to process more information, more quickly, than a person is capable of, making it impractical for a person to act as a check and balance on behavior that is already beyond their own abilities.
Put simply for automated vehicles: the more capable an automated driving system is, the less likely a human driver will be to successfully supervise its safe operation. This is not a moral failing on the part of the driver or supervisor; it is an inevitable cluster of related issues that affects all people working with automation. While training and education might smooth out the rough edges, these are innate limitations baked into all people.
Automation complacency
Human drivers are prone to automation complacency,3 in which good performance over a comparatively short period of time lulls a human driver into over-trusting the automation.
A short-term contributor to automation complacency is a well-known limit on any person’s ability to concentrate intensely on a boring supervisory task. We have known this since the late 1940s, when studies showed performance degrading after about 30 minutes unless a rest break is taken.4
Longer term, after a few hours (or perhaps a few dozen hours) of flawless automation performance, all but the most skeptical and cautious human drivers are likely to start over-trusting that the automation is safe. That in turn makes it likely they will reduce the amount of attention they pay to a supervision task. Bored humans also have a tendency to find new tasks to keep their mind occupied, such as using their mobile phone. This is not an explicit choice to be irresponsible; it is human nature.
When coupled with disinformation such as being told that the automation is a better driver than they are, people are likely to make poor choices, especially while impaired. For example, they might decide that if automation is better than they are, they can trust it to drive them home after a night of drinking. While some might blame the people who make this choice for having poor moral character, taking into account how people can be expected to behave in such a situation, the reality is that this is closer to entrapment.5
Automation bias
Humans tend to over-trust automated systems, resulting in automation bias: a preference for automated decisions even when available evidence suggests the automation is wrong. This might show up in driving as a decision to let a computer driver do something potentially dangerous, such as following a lead vehicle too closely, on the assumption that the automation will only take safe actions. By the time a supervising human driver realizes the automation has made a bad choice, it might well be too late to salvage the situation.
Automation bias has already created problems at scale in non-AV systems. One article discusses three people wrongfully arrested after police too readily believed incorrect AI identifications.6 The cases were eventually dropped, but not before a prolonged process that included 10 days of jail time for one of the accused. As AI facial recognition has become more widely available, automation bias in using that technology has become more of a problem.7 In the moment, it is simply too easy to trust an AI-created answer without digging deeper into the truth.
Automation bias can be made worse by consumer overconfidence, perhaps because consumers have been told that an automated driving feature is safer than a human driver, or that an automated driving feature “fully” drives by itself.8
It is common for consumers to be confused about the automation features provided by their vehicle, especially when car-makers over-hype capabilities. This tends to lead AV users to overestimate the intended automation capability. In particular, drivers tend to overestimate their knowledge of self-driving features, and drivers who report higher self-rated knowledge tend to be more accepting of the technology.9
To illustrate some of the issues involved, here is a simple thought experiment: consider an automated driving system that uses a very short following distance behind a leading vehicle to increase road capacity. Let us say that following distance is too short to be safe for a human driver, but safe enough for the faster reaction times of a computer driver. If the automation fails during an unusual situation, the human driver has no chance to ensure safety because doing so requires a Perception-Reaction Time (PRT) shorter than is humanly possible. However, the human driver is likely to be comfortable supervising that too-short following distance if the computer driver has driven them for hundreds of trips without ever failing to stop for a leading car as required. On the day that the automation fails, the human supervisor will be unable to prevent a crash.
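To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (speed, headway, reaction times) is an assumption chosen for illustration, not a measured value:

```python
# Hedged sketch: compare the distance a driver travels before braking
# even begins against the available following gap. Illustrative numbers only.

def reaction_distance_m(speed_kph: float, prt_s: float) -> float:
    """Distance traveled during the perception-reaction time (PRT),
    before any braking happens at all."""
    return (speed_kph / 3.6) * prt_s

speed_kph = 100.0      # assumed highway speed
computer_prt_s = 0.2   # assumed computer reaction time
human_prt_s = 1.5      # commonly cited ballpark for human PRT

headway_s = 0.5        # the "too short for a human" following gap
headway_m = (speed_kph / 3.6) * headway_s   # about 13.9 m

print(f"available gap:     {headway_m:5.1f} m")
print(f"computer travels:  {reaction_distance_m(speed_kph, computer_prt_s):5.1f} m before braking")
print(f"human travels:     {reaction_distance_m(speed_kph, human_prt_s):5.1f} m before braking")
# The human's reaction distance alone (~41.7 m) is roughly three times the
# entire gap, so no amount of vigilance lets the human recover in time.
```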
Consider another thought experiment: a particular road is prone to having lanes partially obstructed by road sweepers. A computer driver’s usual behavior is to wait until about a second before collision to swerve partially into the adjacent lane to avoid impact with a road sweeper in the vehicle’s own lane. One day, a supervising driver notices that the computer driver has not swerved at its customary point, but gives it another fraction of a second due to over-trust built by hundreds of previous successful swerves (and not really paying close attention, because watching a nearly-perfect computer driver is incredibly boring!). But that day the computer driver does not swerve, and by the time the supervising human realizes what is happening, the remaining time is well inside that driver’s PRT, leaving no way to respond. A fatal crash ensues.10 The crash is blamed on the human supervisor being distracted, old (with a presumed slow PRT), or out of practice at emergency takeovers. Commenters who point out that all of these issues are inevitable given the Ironies of Automation are shouted down by a driver error narrative orchestrated by the manufacturer’s supporters.
The fundamental question here is: if we think automation is better than a human driver, why would we expect that bored, complacent, less-capable, out-of-practice human driver to be able to assure automation safety by intervening when the automation itself cannot handle a situation? This problem has long been identified as an issue with AV technology.11
There is a way out of this dilemma. We need to make sure that all automation failures are inherently benign. This gives the human supervisor plenty of time to regain situational awareness and take over manual control. But that starts looking less like Supervised Automation and more like Autonomous Mode operation that guarantees the vehicle is put into a safe state for as long as it takes for the operator to regain control.
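One way to picture that requirement is as a mode state machine in which every automation failure routes through a benign safe state, with handover to the human only after an explicit confirmation. This is a toy sketch of the idea; the mode names and transitions are assumptions for illustration, not any real vehicle’s design:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()
    SAFE_STATE = auto()   # e.g., stopped out of the traffic flow, hazards on
    MANUAL = auto()

class DrivingModeManager:
    """Toy model: every automation failure routes to a benign safe state
    instead of demanding an instant human intervention."""

    def __init__(self) -> None:
        self.mode = Mode.AUTOMATED

    def on_automation_failure(self) -> None:
        # Never hand raw control to a possibly disengaged human;
        # reach a safe state first, for as long as that takes.
        self.mode = Mode.SAFE_STATE

    def on_human_confirms_takeover(self) -> None:
        # Handover happens only from the benign safe state, so the human
        # has unlimited time to regain situational awareness.
        if self.mode is Mode.SAFE_STATE:
            self.mode = Mode.MANUAL
```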
Mode confusion & alarm fatigue
Another struggle for automated systems is mode confusion, in which the human supervisor misunderstands the current operating mode of the system.12 One safety investigation involved crashes in which an automated steering feature had been unintentionally disengaged before the crash.13 The thinking is that the disengagement was unintentional due to poor human interaction design. The human supervisor thought that the car was in automated steering mode, but the computer driver was actually in manual steering mode. Vehicles seem to have crashed before the human driver could notice and react to a steering control lapse caused by an unanticipated loss of steering automation.
The problem of mode confusion has been known for decades, having contributed to many crashes.14 An especially notable mishap was the tragic loss of Air France flight 447 over the Atlantic Ocean in 2009.15
Humans also struggle with alarm fatigue, in which constant alarms quickly come to be ignored, or are disabled on purpose due to annoyance.16 A close cousin is alarm overload, in which numerous minor nuisance alarms distract an operator from a different alarm signaling that something extremely bad is happening. Another issue arises when alarms of different severities have the same or similar audible or visual indications, making it easy to overlook a high-severity alarm in a burst of simultaneous low-severity alarms.
Summarizing mode confusion and alarm fatigue as a pair, what you want is a system that makes it intuitively clear to a human operator/supervisor what the current operating mode is, presents the most urgent currently active alarms first, and makes it intuitive to take action appropriate to the current operational mode. This is no small task, but it is essential for operational safety.
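As a sketch of the alarm-prioritization half of that requirement, here is a toy Python example that always presents the highest-severity active alarms first, so that a burst of nuisance advisories cannot bury a critical warning. The severity tiers and example messages are assumptions for illustration:

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    ADVISORY = 1   # low severity: quiet chime, can be batched
    CAUTION = 2    # medium severity: distinct tone
    WARNING = 3    # high severity: unique tone and color, never masked

@dataclass
class Alarm:
    severity: Severity
    message: str

def display_order(active: list[Alarm]) -> list[Alarm]:
    """Most urgent alarms first, so simultaneous low-severity nuisance
    alarms cannot hide the one signaling real trouble."""
    return sorted(active, key=lambda a: a.severity, reverse=True)

active = [
    Alarm(Severity.ADVISORY, "washer fluid low"),
    Alarm(Severity.ADVISORY, "map update available"),
    Alarm(Severity.WARNING, "automated steering disengaged"),
]
for alarm in display_order(active):
    print(f"{alarm.severity.name}: {alarm.message}")
```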
This is a sneak peek section from an upcoming book I’m writing. Stay tuned for more information.
For example, see this IIHS study: https://lindseyresearch.com/wp-content/uploads/2020/10/NHTSA-2019-0037-0015-IIHS_Study_on_Driver_Disengagement_-_Reagan_et_al_2020.pdf
The themes in this posting revolve around the concept of the Ironies of Automation, with some additional thoughts beyond the seminal paper that coined the phrase. Anyone designing automated vehicle features who has not read that paper should not be designing them. See: Bainbridge, Lisanne (1983). “Ironies of automation.” Automatica, 19(6), 775–779. https://doi.org/10.1016/0005-1098(83)90046-8
This Wikipedia article covers both automation complacency and automation bias: https://en.wikipedia.org/wiki/Automation_bias
Early work by Mackworth showed decreased vigilance after the first 30 minutes. See: Mackworth, 1948, “The breakdown of vigilance during prolonged visual search” at https://journals.sagepub.com/doi/10.1080/17470214808416738 (paywall)
Here is a story about someone bragging they did just this and got away with it: https://jalopnik.com/probably-drunk-tesla-owner-let-fsd-beta-take-the-wheel-1849964850
Here is a story about someone arrested for doing this: https://www.independent.co.uk/news/world/americas/crime/tesla-self-driving-drunk-driver-b1924303.html
And here is a report of a drunk driver who trusted a Supervised Automation feature, resulting in a crash into a police car: https://www.drive.com.au/news/tesla-autopilot-police-car-crash-report/
Simply scolding people for this behavior is not going to stop the next crash.
From a lawsuit attorney: “One of the arguments we make is you can’t get more self-driving than fully self-driving” https://www.washingtonpost.com/technology/2024/07/11/elon-musk-tesla-full-self-driving/
“Consumers are still exposed to widespread miscommunications and misuse of terminology” See page 1171 of: https://journals.sagepub.com/doi/pdf/10.1177/21695067231192860
This scenario is consistent with perhaps the first Tesla Autopilot fatality: https://jalopnik.com/the-first-fatal-tesla-autopilot-crash-may-have-happened-1786626985
An alternate term is mode error. See: https://en.wikipedia.org/wiki/Mode_(user_interface)#Mode_errors
Presumably automation complacency resulted in a too-slow reaction to detect and compensate for subsequent lane departures and crashes after automated steering was inadvertently disabled. See: https://static.nhtsa.gov/odi/inv/2022/INCR-EA22002-14496.pdf
See this 1998 NASA paper: https://shemesh.larc.nasa.gov/fm/papers/butler-etal-dasc98.pdf