Remote Assistants for Autonomous Vehicles
In which the AV industry tries to have it both ways, leaving nobody accountable.
Some readers might have been surprised in recent months to find out that there are remote operators for robotaxis and other autonomous vehicles. Indeed, every company that is serious about deploying robotaxis and robotrucks has them. Some are remote drivers, complete with steering wheels, who either drive remotely or act as continuously attentive safety drivers. However, for many major Autonomous Vehicle (AV) companies the party line is that their remote assistants only provide on-demand driving hints to their vehicles, and have no role to play in safety because they are not actually driving. While there is a seductive appeal to thinking robotaxis just need a little help while they are “learning,” the safety case for such interactions is fundamentally broken. In reality, the remote assistance narrative is a way for the AV industry to dodge accountability for prematurely removing safety drivers.
All companies operating uncrewed vehicles on public roads have remote operators one way or another. Cruise has a remote operations center in Arizona that provides “wayfinding intel” or “help identifying an object” (CNBC story). Zoox has a remote assistance center in Foster City, California that can provide path guidance (NY Times story/paywall). Waymo remote assistants can provide guidance to its vehicles (Ars Technica story), which as we will discuss below appears to include helping decide what color a traffic signal light is. And so on.
There is a profitability aspect to the ratio of human operators to vehicles on the road. In-vehicle safety drivers tie up a full driver for each vehicle, and present problematic optics for the next round of fund-raising. Remote operators might each oversee multiple vehicles, helping achieve promised cost reductions and mobility benefits. But the question we deal with here is the degree to which these remote operators affect road safety when they need to intervene.
Remote Assistance vs. Remote Driving
The AVSC consortium of AV industry companies has published a document (free download here: AVSC-I-04-2023) that discusses this topic at some length, identifying the following types of remote operation interactions:
Remote interactions with the vehicle/fleet:
Remote driving
Remote monitoring
Dispatching
Remote Automated Driving System (ADS) assistance
Remote interactions with humans:
Customer support
Authority interactions (e.g., police, firefighters)
Other road user support
That AVSC document also defines two key concepts: (1) Remote Assistance, which is providing advice to a computer driver while the computer itself remains in charge of driving (and, implicitly, of safety), and (2) Remote Driving, in which a remote human actually performs the driving task, perhaps using a steering wheel and pedal controls at a remote location to control the vehicle’s motions, with that human driver necessarily taking primary responsibility for safety.
The industry position is that (simplifying only slightly) if the remote operator does not have a steering wheel, that must mean the computer on the robotaxi is driving, and therefore the computer — not the person — is responsible for safety.
Remote Driving has significant challenges with situational awareness, communication latency, and communication reliability. Remote safety driving is even more challenging, adding issues such as automation complacency slowing reaction time to a misbehaving computer driver after a long period of inactivity. However, as concerning as those topics are (link to white paper), for this piece we concentrate on the task of Remote Assistance.
Remote Assistance Activities
The first serious safety concern with remote assistance is that the computer driver in the car has to be smart enough to know when it needs to call home for help. The Cruise pedestrian dragging mishap got this tragically wrong (link to paper & talk) by initiating a pull-over maneuver with an entrapped pedestrian under the vehicle. The robotaxi sent a crash alert to its remote operations center, but due to a misdiagnosis of the crash scenario did not wait for a response before starting the dragging portion of the mishap. Presumably changes are being made to avoid a repeat. However optimistic it might be, for our purposes let us assume this problem has been solved, and that knowing when to wait for remote assistance is 100% accurate.
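To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical class and function names rather than any company's actual software) of the difference between firing off an alert while continuing to maneuver, which is what happened, and holding still until a human responds:

```python
import time

class RemoteOpsLink:
    """Hypothetical stand-in for a vehicle-to-operations-center channel."""
    def send_crash_alert(self, snapshot):
        ...  # transmit a sensor snapshot to the remote operations center
    def poll_response(self):
        ...  # return an operator instruction, or None if none has arrived yet

def mishap_style_behavior(vehicle, link, snapshot):
    # The failure mode described above: the alert goes out, but the
    # vehicle starts its pull-over maneuver without waiting for an answer.
    link.send_crash_alert(snapshot)
    vehicle.execute_pullover()

def wait_for_assistance(vehicle, link, snapshot, timeout_s=300):
    # A safer gate: immobilize first, then block on a human decision.
    vehicle.hold_still()                       # brakes on, no motion commands
    link.send_crash_alert(snapshot)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        instruction = link.poll_response()
        if instruction is not None:
            return instruction                 # human decides before any motion
        time.sleep(0.5)
    return "REMAIN_IN_MINIMAL_RISK_CONDITION"  # fail safe, not fail operational
```

Even this sketch assumes away the hard part: that the on-board software correctly recognizes it is in a situation that requires the gate in the first place.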
Once the phone-home process has been activated, is that remote operator responsible for safety or not? Attempting to draw a clean line between remote assistance and remote driving becomes problematic when trying to understand the safety implications of such arrangements.
Consider some of the functions that a remote assistant might perform, depending on the particular company and operational concept (a sketch of what such an interaction might look like follows this list):
Interpreting unusual presentations of road signs and traffic signals
Interpreting ambiguous hand gestures by people directing traffic
Determining when to start moving in a socially negotiated traffic situation, such as taking turns for simultaneous arrival at a one-lane bridge or at a saturated 4-way stop intersection
Determining how to handle traffic patterns at a pop-up construction zone or emergency response scene
Determining whether an object is really a person or some optical phenomenon that can be ignored
Determining when it is acceptable to break normal traffic rules due to exceptional situations, such as entering an opposing traffic lane to go around an obstruction
Determining whether it is safe to move a vehicle after a crash that might have involved a pedestrian
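To see why this list is about driving decisions rather than friendly hints, consider what such an exchange might look like on the wire. This is a hypothetical sketch; the message and field names are illustrative, not any company's actual protocol:

```python
from dataclasses import dataclass
from enum import Enum, auto

class RequestKind(Enum):
    TRAFFIC_SIGNAL_STATE = auto()    # "what color is this light?"
    OBJECT_CLASSIFICATION = auto()   # "is this a pedestrian or a shadow?"
    RULE_EXCEPTION = auto()          # "may I enter the opposing lane?"
    PROCEED_DECISION = auto()        # "is it my turn to go?"

@dataclass
class AssistanceRequest:
    kind: RequestKind
    sensor_snapshot_id: str          # the imagery the operator will judge from

@dataclass
class AssistanceResponse:
    operator_id: str                 # who made the call
    answer: str                      # e.g., "GREEN", "NOT_A_PERSON", "PROCEED"

# Note what is absent: nothing in the schema marks the response as mere
# "advice." If the vehicle acts on the answer, the answer is a driving
# decision, steering wheel or no steering wheel.
```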
As the list goes on, it becomes more and more obvious that these activities are not merely providing advice, but amount to delegating life-and-death driving decisions to a remote human operator.
The key issue is not whether there is a remote steering wheel — it is whether the remote operator’s decision can cause harm to road users.
Remote Assistance Safety Issues
Hypothetical Case: Consider if the Cruise robotaxi in the pedestrian dragging mishap mentioned above had indeed paused after initially striking that pedestrian in San Francisco to wait for instructions from a remote operator in Arizona. What if that operator had then told the robotaxi “OK to pull to side of road,” then ignored the visual and audio cues that a pedestrian was being dragged until the vehicle itself decided to shut down 20 feet later? How many people would hold such a human operator completely blameless? (To be sure, this is a counterfactual scenario — the real robotaxi did not wait for instructions, so remote operators had no chance to intervene. However, hypothetically changing the robotaxi’s behavior to wait for remote assistance reveals the critical role human operators can be expected to play in practice.)
Real-World Case: In January 2024, a Waymo robotaxi proceeded through a red light due to an incorrect command from a remote operator. A cross-traffic moped driver who had a green traffic light fell and slid, presumably reacting to the Waymo vehicle presenting a collision threat as it entered the intersection. Fortunately, there were no reports of injuries. (Here is a Forbes article that makes it clear that bad advice came from the remote operator: https://bit.ly/3Y2V1mz ) The importance of this Waymo incident is that it shows the real-world potential for bad “assistance” from a remote operator to result in dangerous vehicle behavior. This is not just a theoretical issue.
The usual robotaxi company position has been that remote assistants are just there to provide advice, and that the on-board computer driver retains responsibility for safety. Superficially that sounds great – but in reality it presents a thorny problem for safety. It means that a remote human can give information that might directly cause a crash, yet that human bears no responsibility for that action because the computer retains responsibility.
How very convenient for the industry.
Dodging Accountability
From an industry point of view, shielding remote assistants from safety accountability is an exceedingly clever plan. Operating companies can, if they wish to do so, use unvetted, low-cost, low-skill labor in other countries to provide remote assistance services using non-safety-rated remote monitoring and control equipment. While some companies will insist they will uphold higher standards (or at least uphold those higher standards for now), there will be serious economic pressure to use the cheapest possible remote assistance once fleets scale up. It is foolish to expect economic pressure at scale to incentivize anything other than the lowest-cost solution permitted by regulation, first for some operators in this space and eventually perhaps for all of them. Even companies that insist they would never cut corners on safety will be vulnerable to being undercut on costs by less scrupulous competitors.
And just wait until some bright management consultant figures out that you can configure a non-safety-rated Large Language Model/chatbot to handle remote assistance requests. (Or have they already? It is hard to keep up on that front.) After all, if the LLM is not responsible for safety, hallucinations are not much of a problem. In fact, I would argue this type of outcome is inevitable due to economic pressure at scale.
Imagine if you are a pedestrian crossing in front of a robotaxi at night. That robotaxi is not sure if the light is red, or if the vague shape it sees in the crosswalk is really a pedestrian. A teenager half a world away who has never driven a vehicle, has no driver’s license, just started the job after a cursory one-day training period, and showed up at work sleep-deprived and just a little bit high today decides the light is green, and that the thing in the crosswalk is just a shadow. Because at the rock-bottom wages they are paying him, why should he really care all that much? Everyone told him the robotaxi is safer than a human driver and he’s just there to give some advice. The call center timer on the interaction is ticking down to a reprimand for being too slow. And the LLM assistant says “don’t worry, it’s fine.” So the incentivized action is to just slap the “proceed” button and move on to the next session.
That remote operator casually tells the robotaxi to go for it, resulting in the robotaxi killing the pedestrian. (*)
If that type of scenario were to happen today with a vehicle in Oklahoma (**) and a remote operator in a Central American country, there would be literally nobody responsible for a fatality. That would be true even if a human driver taking the same action in the same circumstance would be found negligent. In some other state in which the remote operator might be held accountable according to state laws, imposing that accountability on a foreign national living outside the US who was told they were not responsible for safety is going to be quite problematic. If you can even find that operator after the fact. And forget about the post-crash sobriety test.
(*) For those readers saying “what about lidar??” – recall that lidar did not stop a Cruise vehicle from hitting a city bus, apparently because camera-based behavior over-ruled lidar detections of that bus. (Link to article) And lidar did not stop a Waymo vehicle from hitting a utility pole due to a different sort of software defect. (Link to article) Or maybe this hypothetical situation involves a robotaxi without lidar. Additional note: for those who want to know more about remote operations for AVs, follow Missy Cummings.
(**) In Oklahoma the ADS computer is considered responsible for violation of road rules, such as running a red light. See the bottom of page 7 of the Act here: https://bit.ly/47JlGYG. While any company’s public relations team will strenuously deny that their particular company would ever reduce costs to the point that such a hypothetical scenario becomes reality, we should expect that a competitive market operating at scale will see the maximum cost-cutting permitted by regulations. As a result of a race to the bottom in not “stifling innovation,” regulations in many states permit this type of scenario to play out.
There is little apparent consequence for poor standards in qualifying, training, and supervising remote operators. Because, after all, according to the industry narrative, they aren’t really “driving.”
Imposing Some Accountability
With current state laws there is a significant incentive structure to tell everyone that remote assistants have no role to play in safety, even when we have already seen their mistakes contribute to mishaps on public roads. This is not to say that remote assistants should be held to a standard of perfection, because people aren’t perfect. But the legal and regulatory systems need to address this situation one way or another to avoid continuing down a path in which harm done to road users leaves nobody responsible.
My preferred option is to hold the manufacturers of computer drivers accountable for any negligent driving by their systems while the computer is engaged. (More details here: https://bit.ly/3YdEjB7 ) If bad advice comes from a remote assistant, that is still the computer’s fault for accepting bad advice. If the computer cannot handle the situation, then it should cede full driving safety responsibility to a qualified human driver, ensuring safety during the transition process. This way either the computer is fully in charge or a human driver is fully in charge at any point in time. If computer drivers are not up to the task and remote drivers are infeasible (which will often be the case), then there needs to be an in-vehicle safety driver.
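A minimal sketch of that mutual exclusion, with hypothetical names (this illustrates the proposed accountability model, not a real ADS interface):

```python
from enum import Enum, auto

class ResponsibleParty(Enum):
    COMPUTER_DRIVER = auto()   # manufacturer is accountable
    HUMAN_DRIVER = auto()      # a qualified, licensed human is accountable

class DrivingResponsibility:
    """Exactly one party owns driving safety at any point in time."""
    def __init__(self):
        self.party = ResponsibleParty.COMPUTER_DRIVER

    def hand_over_to_human(self, human_is_qualified: bool,
                           transition_is_safe: bool) -> None:
        # Responsibility transfers only as a complete package, and only
        # when the transition itself can be made safely. There is no
        # "advisory" state in which a human influences driving without
        # owning the consequences.
        if human_is_qualified and transition_is_safe:
            self.party = ResponsibleParty.HUMAN_DRIVER
        # Otherwise the computer remains fully responsible, including
        # for reaching a minimal risk condition on its own.
```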
Another option for accountability is to hold remote assistants responsible and accountable for negligent driving decisions provided in the form of “advice” to a computer driver. At a minimum this means that remote assistants will need a driving license, performance monitoring, adequate situational awareness information, adequate control over driving behavior, and so on. This has all the pitfalls we see with moral crumple zone design patterns for supervised automation, in which a primary purpose of a human operator is to take the blame for a computer system malfunction. (SSRN Preprint) This is the path the industry is really embarked on once you strip down the misleading remote assistance narrative. Because what company is going to take the blame when they could instead hang an employee out to dry for behaving badly? (link to a cautionary tale) I strongly recommend against this path.
Either way, a remote operator making decisions about driving behaviors, objects, and events presents significant challenges to safety. Remote assistants will have reduced situational awareness compared to being in the vehicle. And they might not have adequate time to gain situational awareness if intervening in response to an on-demand request on short notice. Communication lag and communication disruptions are likely to be common. All these issues can make it infeasible for them to react properly to hazards presented by a quickly changing road situation.
Recommendations
Remote assistants and remote drivers should be within reach of the law in case of negligent or reckless behavior.
Moreover, to avoid moral crumple zone strategies, companies also need to be held accountable for unsafe remote operator behavior as well as unsafe computer driver behavior, regardless of the corporate liability firewalls set in place by contractor relationships and the like.
To improve the situation, states should, at a minimum:
Require any remote operator of any kind to have a valid license for the type of vehicle being operated (e.g., a CDL for a heavy truck). A further requirement for a clean driving record is even better.
Require defined training procedures and retention of training records, as well as periodic skill evaluation for remote operators. SAE J3018 is not a bad starting point.
Require remote operators to be in a specified jurisdiction (for example, in the same state as operations) for accountability.
Require the retention of auditable logs identifying which person was involved in each remote interaction, for accountability. (A sketch of one possible log record follows this list.)
Name the system manufacturer as the primary responsible party for any mishap in which the remote driver was not fully controlling vehicle motion with a conventional control arrangement (steering wheel, accelerator, brake pedal). This provides some safety guardrail pressure without micro-managing remote operations approaches.
Consider computer drivers as a type of “driver,” with the same accountability and responsibilities as human drivers. The manufacturer should be held responsible for breaches in a duty of care for the safety of road users.
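For concreteness, here is one possible shape for the auditable log record mentioned in that list. The field names are hypothetical illustrations, not a regulatory proposal:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RemoteInteractionRecord:
    timestamp_utc: str           # when the interaction occurred
    vehicle_id: str              # which vehicle asked for help
    operator_id: str             # which licensed person responded
    operator_jurisdiction: str   # where that person was located
    request_summary: str         # what the vehicle asked
    response_summary: str        # what the operator answered
    sensor_snapshot_ref: str     # the evidence the operator saw when deciding
```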
If manufacturers are unhappy with this list, in-vehicle safety drivers remain an obvious alternative.
What we should not do is continue with the fiction that remote assistants play no role in safety. No more pretending potentially unqualified remote assistants are just there to give harmless advice. It’s simply not true.
Prof. Phil Koopman has been working on self-driving car safety for more than 25 years at Carnegie Mellon University.
Thanks to Fred Perkins for his comments on a draft version.
Hello and thank you for this article!
A great input to all the ongoing work on hypervision/supervision/remote control...
For information: in France, the administration and industry have established the framework for commercial AV services. The remote operator must have specific, dedicated training. To access this training, the operator must hold the driving licence for the vehicles he/she supervises.
I haven't yet understood your pedestrian example: maybe a prototype AV might miss it. But how can an AV that has demonstrated a safety concept, and so been allowed into commercial service on open roads, miss an obstacle crossing just in front of the vehicle? Shouldn't that be a clear failure of the self-driving system, whatever the remote operator's advice?
Let's say it was at the edge of the ODD (at night, with buildings...) and the AV asked the remote center.
Open discussion & thinking:
1 - The vehicle is driving and is responsible for driving safety.
2 - The remote control center module interacting with the driving task may ask the vehicle to restart, but the vehicle may keep a simple robotic collision-avoidance capability.
-> If the platform tells the ADS "the door is closed," that doesn't mean the platform is responsible for driving; the ADS keeps the responsibility for moving with this information.
-> If the remote operator says "path is OK on this side," maybe the ADS should still be able to see and hard-brake if necessary. The remote operator has overridden the "comfort driving rules" but not the core anti-collision rules?
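A minimal sketch of that separation (hypothetical names, just to illustrate the idea): remote advice may relax the comfort rules and unlock motion, but the on-board anti-collision check always has the last word.

```python
# Hypothetical sketch: remote advice can unlock motion, but the core
# on-board anti-collision rule is never delegated to the remote operator.

def propose_cautious_creep() -> str:
    return "CREEP_FORWARD_SLOWLY"   # advice relaxed the comfort rules

def hold_position() -> str:
    return "HOLD"

def emergency_brake() -> str:
    return "HARD_BRAKE"

def collision_imminent(perception: dict) -> bool:
    # stand-in for the independent on-board safety envelope
    return perception.get("obstacle_in_path", False)

def plan_motion(perception: dict, remote_advice: str) -> str:
    candidate = (propose_cautious_creep()
                 if remote_advice == "PATH_OK_PROCEED"
                 else hold_position())
    if collision_imminent(perception):
        return emergency_brake()    # the last word stays on board
    return candidate
```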
Although I see the AV as solely responsible for the safety of the driving (at the AV level) (but I love to be challenged), the control center plays a role in safety for sure!
1° [system design] The control center is (as for driver-operated mobility services) responsible for the operation (safety at the operational level: the technical system as applied at one location), including passenger security and quality of service.
We often talk about pure driving (the angle of the ADS or OEM), but if we analyze safety at the operational level, we see some additional Unexpected Events.
e.g.: The vehicle drives into a dangerous street (riots, industrial leaks...) -> the AV will stop in front of the people in the street, but having hundreds of demonstrators around the AV may be an unsafe situation for the passenger -> we should have stopped the vehicle even before it could see what's in that street.
...
2° [operating procedures] The control center ensures that all procedures are correctly applied at all times.
3° [continuous improvement] The control center ensures constant monitoring, reporting, and comparison between operations.