Time to Formally Define Level 2+ Vehicle Automation
Or maybe just combine Level 2+ with Level 3 to be "supervised" automation and be done with it
We should formally define SAE Level 2+ to be a feature that includes not only Level 2 abilities but also the ability to change the vehicle’s travel path, for example by automatically making turns. Level 2+ should be regulated in the same bin as SAE Level 3 systems.
[This is a refinement of a blog piece from June 2024 posted in honor of the Tesla robotaxi reveal tonight. Please consider this my normal weekly posting.]
There is a lot to unpack here, but ultimately doing this matters for road safety, with much higher stakes over the next 10 years than regulating completely driverless (Level 4/5) robotaxi and robotruck safety, because Level 2+ is already on the roads, doing real harm to real people today.
First, to address the definition folks who are losing it over me uttering the term "2+" right now: I am very well aware that SAE J3016 outlaws notation like "Level 2+". My suggestion is to change things to make it a defined term, since it is being used with or without SAE's blessing, and we urgently need a consistently defined term for the things that everyone else calls Level 2+ or Level 2++. (Description and analysis of SAE Levels here. Myth 5 talks about Level 2+ in particular.)
From a safety point of view, we've known for decades that when you take away steering responsibility the human driver will drop out, suffering from automation complacency. There have been enough fatalities from plain features said to be Level 2 (automated lane keeping + automated speed), such as cars under-running crossing big rigs, that we know this is an issue. But we also have ways of trying to address this by requiring a combination of operational design domain enforcement and camera-based driver monitoring. This will take a while to play out, but the process has started. Maybe regulatory intervention will eventually resolve the worst of those issues. Maybe not -- but let's leave that for another discussion.
What's left is the middle ground between next-gen-cruise-control features (lane centering + automated speed) and vehicles that aspire to be robotaxis or robotrucks but aren't quite there. That middle ground includes a human driver so the designers can keep the driver in the loop to avoid crashes, and to take the blame when crashes happen anyway. If you thought plain Level 2 had problems with automation complacency, Level 2+ says "hold my beer." (Have a look at the concept of the moral crumple zone. And do not involve beer in driving!)
Expecting a normal human being to pay continuous hawk-like attention for hours while a car drives itself almost perfectly is beyond credibility. And dangerous, because things might seem fine for lots and lots of miles — until the crash comes out of the blue and the driver is blamed for not preventing it. Simply telling people to pay attention isn’t going to cut it. And I really have my doubts that driver monitoring will work well enough to ensure quick reaction time after hours of monotony.
People just suck at paying attention to boring tasks and reacting quickly to sudden life-threatening failures. And blaming them for sucking at this due to being normal human beings won’t stop the next crash. I think the computer driver is going to have to be able to actively manage the human rather than the human managing the computer. The computer driver will have to ensure safety until the human driver has time to re-engage with the driving task (10 seconds, 30 seconds, maybe longer sometimes). That sounds more like a Level 3 feature than a Level 2 feature from a regulatory point of view.
Tesla FSD is the poster child for Level 2+, but over the next 5 years we will see a lot more companies testing these waters as they give up on their robotaxi dreams and settle for something that almost drives itself -- but not quite.
The definition I propose for Level 2+ is a feature that meets the requirements for Level 2 but is also capable of departing its current lane of travel automatically.
To put it simply: if it drives you down a single lane, it's Level 2. But if it can make turns or change lanes (i.e., intentionally depart the current lane boundaries) without an explicit driver command to do so, it is Level 2+.
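For readers who like their definitions crisp, the proposed rule can be sketched as a simple decision procedure. This is purely illustrative pseudocode made concrete; the feature flags and names here are my own invention, not anything from SAE J3016 or any regulation.

```python
# Hypothetical sketch of the proposed classification rule.
# Field names are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Feature:
    sustained_lane_centering: bool  # Level 2 prerequisite
    sustained_speed_control: bool   # Level 2 prerequisite
    auto_lane_departure: bool       # turns / lane changes without explicit driver command

def classify(f: Feature) -> str:
    """Level 2+ = Level 2 capabilities plus automatic lane departure."""
    if not (f.sustained_lane_centering and f.sustained_speed_control):
        return "below Level 2"
    return "Level 2+" if f.auto_lane_departure else "Level 2"

print(classify(Feature(True, True, False)))  # lane-following cruise control -> Level 2
print(classify(Feature(True, True, True)))   # makes turns on its own -> Level 2+
```

The whole point of the single `auto_lane_departure` test is that it is binary and observable from outside the vehicle, rather than depending on manufacturer claims about intent.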
One might pick different criteria, but this has the advantage of being simple and relatively unambiguous. You are at Level 2+ once you start doing intersection turns automatically, maneuvering to take exit ramps, and so on. In other words, almost a robotaxi -- but with a human trying to guess when the computer driver will make a mistake and then potentially getting blamed for a crash.
No doubt there will be minor edge cases to be clarified, probably having to do with the exact definition of “current lane of travel” such as parking lots and roads without lane markings. But since most such features will only be worth paying for if they work on roads with clear lane markings, as a practical matter those nuances probably do not matter for classifying which feature is in a car. The point here is not to write detailed legal wording, but rather to get the idea across that being able to “navigate” (i.e., not just be a lane-following cruise control system) is the litmus test for Level 2+. Once drivers do not feel a need to continuously pay attention to keep from crashing, there is a difference in kind to the automation, and that dividing line is what we are looking for here.
From a regulatory point of view, Level 2+ vehicles should be regulated the same as Level 3 vehicles. I realize Level 2+ is not necessarily a strict subset of Level 3, but the levels were never intended to be a deployment path, despite the use of a numbering system. I think they both share a concern of adequate driver engagement when needed in a system that is essentially guaranteed to create driver complacency and slow reaction times due to loss of situational awareness.
How does this look in practice? The various bills floating around federal and state legislatures right now should include a definition of Level 2+ (Level 2 + intersection/interchange capability) and group it with Level 3 for whatever regulatory strategy they propose. I recommend simply defining the group of Level 2+ and Level 3 as Supervisory features (as in the driver is supervising an automated driving function). It is as simple as that.
If the SAE ORAD committee wants to take up this proposal for SAE J3016, that's fine too. (Bet some committee members are reading this; I'm happy to discuss at the next meeting if you're willing to entertain it.) But that document explicitly declares safety to be out of its scope. What I care about a lot more for this essay are the regulatory frameworks that are currently near-toothless for the not-quite-robotaxi Level 2+ features already being driven on public roads.
Note: Based on proposed legislation I've seen, pulling Level 2+ into the Level 3 bin is the most urgent and viable path to improve regulatory oversight of this technology in the near to mid term. If you really want to do away with the levels I have a detailed way to do this, noting that the cut-line for Supervisory in that proposal is at Level 2 rather than Level 2+, but is otherwise compatible with this essay. If you want to use the modes but change the cut line, let’s talk about how to do that without breaking anything.
Note: Tesla fans can react unfavorably to my essays and social media posts. To head off some of the “debate” — yes, navigate-on-autopilot counts as Level 2+ in my view, as does FSD (Supervised). We have the crashes to prove this is an issue. And no, Teslas are not dramatically safer than other cars by any credible analysis I’ve ever seen.
BTW I have to wonder where Tesla got the “supervised” term from for “FSD (supervised)”, since AFAIK I’m the only one who has used that term for vehicle automation.
This, and many other things, are consequences of elections. Empowering NHTSA to set standards with real enforcement power reflects a philosophy that some of us think is the proper role of government. I spent most of my working career adjacent to heavily regulated industries. While it was common to complain about regulation, standards that value the public and that highlight and penalize the outliers are almost always a good thing in my opinion.
Humans are terrible at supervising mostly accurate software; we have known this for decades. See the problem of humans overseeing AI radiography image analysis and failing to catch the cancers the AI missed, which is just as potentially lethal.