Laundering Accountability with Embodied AI
Part 1: Remote Assistants or: how the robotaxi industry stopped worrying and learned to love overseas remote operators
There is a fundamental embodied AI (eAI) safety concern behind the recent headlines surrounding Waymo’s use of overseas remote operators for its robotaxis. The real question is not whether existing terminology standards call them “drivers,” or whether they have steering wheels.
The question that matters is whether accountability is being swept under the rug for inevitable mishaps involving humans “assisting” embodied AI technology.1 In a practical sense, that is exactly what is happening. We might end up in a place where neither the designer nor the operator of an eAI system such as a robotaxi can be held accountable for harm under existing laws, if remote operators are held to be “assistants” with no accountability for mishaps.
So let’s explain why this is so; part 2 will cover what might be done to clean up the impending messes. (If you are coming in cold to this discussion, a previous article on remote assistants will help you warm up.)
A plan for evading accountability for reckless robot behavior
The Autonomous Vehicle (AV) industry has been waging a multi-year campaign to ensure that manufacturers have little to no accountability for reckless behaviors by the AI-driven vehicles they build. This involves state laws making owners, fleet operators, or even the computer itself responsible2 if a computer software fault causes a crash or traffic law violation. So long as the manufacturer is off the hook for reckless driving behavior by the computer drivers they build, they are happy. And, at least until recently, they have been winning the legislative and public narrative battles.
The issue that has recently been getting attention is the involvement of humans in what were being publicized as “autonomous” systems. Turns out, there really is a human behind the curtain … sometimes.
There are two inescapable truths here regarding the current embodied AI safety situation: (1) human operators are needed to handle recovery from edge case encounters, and (2) humans will make mistakes, especially when dropped into high-risk edge case situations as remote operators under time pressure. Neither of these means safety is impossible for careful operators, nor that such systems cannot be deployed in a productive way. But put together, they complicate the issue of safety.
Right now, any company deploying an eAI system enjoys the legal high ground for a mishap caused by their technology. They are immune to tort negligence lawsuits because that type of negligence applies to people, not products. Anyone harmed by a piece of equipment needs to pursue product liability, which is expensive, difficult, and more likely to result in a loss for crash victims due to a highly asymmetric playing field that favors car companies. There is a reason that car companies in particular are more comfortable in the product liability lane. But product liability is simply the wrong tool for the job.
AV developers are highly incentivized to evade tort negligence liability and funnel everything into product liability. To do this, they need a story in which no person has made a mistake that contributes to a mishap. But since people will be involved, and will make mistakes that contribute to mishaps, the industry needs a clever blame architecture for their systems.
That plan is to launder human operator mistakes through an unaccountable AI system.
Calling a friend for help
Any useful embodied AI system will occasionally get stuck. If it is designed to be safe, it will recognize an encounter with a situation it has not been properly trained for or cannot otherwise handle (an edge case). It will then need to maintain some sort of safe holding pattern (pause operation; pull to side of road; circle mid-flight; de-energize actuators; etc.) until it can get help from a human to sort things out.
Getting help to handle edge cases does not mean the technology is a failure. Rather, any really useful embodied AI system will encounter novel situations and need help. Handling them safely is a feature that keeps the system useful even in novel operational conditions, not a problem.
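To make the pattern concrete, here is a minimal sketch of what such an edge-case handling loop might look like. This is a hypothetical illustration, not any company’s actual design: the confidence threshold, mode names, and availability check are all simplifying assumptions.

    from enum import Enum, auto

    class Mode(Enum):
        AUTONOMOUS = auto()  # normal operation, no human involved
        HOLD_SAFE = auto()   # safe holding pattern: paused, pulled over, etc.
        ASSISTED = auto()    # a remote human is connected to sort things out

    EDGE_CASE_THRESHOLD = 0.90  # hypothetical perception confidence cutoff

    def supervisory_step(mode: Mode, confidence: float, helper_connected: bool) -> Mode:
        """One step of a simplified eAI supervisory loop."""
        if mode is Mode.AUTONOMOUS and confidence < EDGE_CASE_THRESHOLD:
            # Recognize an edge case and enter a safe holding pattern.
            return Mode.HOLD_SAFE
        if mode is Mode.HOLD_SAFE:
            # Remain safely paused until a remote human connects.
            return Mode.ASSISTED if helper_connected else Mode.HOLD_SAFE
        return mode

    # Low confidence forces a safe hold; the hold persists until help arrives.
    mode = supervisory_step(Mode.AUTONOMOUS, confidence=0.4, helper_connected=False)
    assert mode is Mode.HOLD_SAFE
    assert supervisory_step(mode, 0.4, helper_connected=True) is Mode.ASSISTED

The point of the sketch is that pausing and asking for help is designed-in behavior, not a malfunction.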
Pressing safety questions regarding remote assistants are:
What happens if eAI fails to seek remote assistance when it should?
What happens if eAI cannot maintain acceptable safety while waiting for remote assistance?
What happens when a remote assistant gives bad advice to the eAI system, resulting in a loss event?
All three questions matter. But given the recent doubling down by the industry on the “remote assistants aren’t drivers” narrative, let’s focus on the third question. What happens if the remote assistant makes a mistake that substantively contributes to a loss event?3
We’ll use a robotaxi example to illustrate how slippery the distinction really is between a “remote driver” (who drives) and a “remote assistant” (who, according to AV companies, is not a driver). This is just an example of a much wider-scale problem that is inevitable for most, if not all, safety-critical embodied AI systems.
A thought experiment on remote driving
To illustrate the issue, consider a hypothetical fixed route shuttle system that follows a fixed path on a roadway, designated by painted lines or an embedded guide wire. Think streetcar or cable car, with an electronic guideway on public roads shared with other road users rather than steel tracks. To keep things simple, and without loss of generality, we consider stopping and going at traffic lights as the sole driving task.
Scenario (1a): There is a human operator in the vehicle with a speed control in the form of an accelerator pedal and a brake pedal. There is no steering wheel, but I think we are safe in saying this person is still a driver for practical purposes.
The human driver of our streetcar has stopped at an intersection. They look up and for some reason think the traffic light has turned green, so they hit the accelerator. But the light is really red, and they collide with a crossing light mobility user. Assuming no extenuating circumstances, I think we are safe in saying that the human trolley driver is responsible for that crash.4 If they had not accelerated, no crash would have occurred,5 and road rules make it very clear they should not have accelerated into a red traffic light.
That human driver probably works for a paycheck. If there is a crash, any victims could seek compensation from the company employing the driver, with the operational legal mechanism being tort negligence liability. Because they are operating a vehicle on a public road, that human driver has a duty of care for the safety of other road users, and can be held liable for failing to exercise reasonable care, such as by running a red light. Because the driver is an employee, the trolley service company can be dragged into any lawsuit.
Insurance might cover some of the loss. But victims would be entitled to pursue a lawsuit if insurance claims were denied or victims feel insurance payouts are insufficient to address the harm done.
Scenario (1b): Someone figures out that replacing big trolley cars with a ride hail model provides better capital efficiency and more responsive service at some times of day. So ride hail drivers hook their vehicles up to the system with an adapter box. That box handles the steering by following the electronic tracks. Now drivers just need to work the accelerator and brake pedals to respond to traffic lights, reducing driver workload.
The human operator is still a driver. Whether the ride hail network company can remain unentangled from tort negligence liability for crashes due to running red lights is unclear, due to contractor vs. employee status legal nuances. Regardless, there will be a significant incentive for the ride hail network provider to ensure that human drivers have robust insurance coverage, likely well in excess of what typical civilian drivers carry. That way things will seldom, if ever, come down to a high-stakes courtroom scenario that threatens to spill over onto the ride hail network company.
For our purposes, both scenarios (1a) and (1b) lead down similar paths in the scenarios that follow: there are human drivers, and associated companies concerned with potentially expensive tort negligence lawsuits for mishaps.
Scenario (2): Now move that human operator from Scenario 1 into a remote operations center. Give them a video feed and realistic vehicle controls. Assuming that a reasonable driver with appropriate training and qualifications on that equipment would have seen a red light on their video display, that remote operator is in the same situation with the same responsibilities at a red traffic light as an in-vehicle operator.
Assuming there has been no equipment failure, that remote driver should be held responsible for accelerating into a red traffic signal, just as if they had been physically in the vehicle.
Scenario (3): Replace the accelerator and brake pedals with a touch screen with two marked buttons on it: “stop” and “go”. The remote human driver is still fully in charge of vehicle motion.
Still a driver, right? Being a “driver” is not necessarily about being in the driver’s seat, and is not necessarily about having conventional vehicle controls. Deciding to “go” and sending a command that makes the vehicle “go” still makes someone a driver.
If the driver presses the “go” button at a red light, causing the vehicle to enter an intersection against a red light, should that driver be held responsible for a crash? The answer still needs to be yes. They are still driving in a very real sense.
Scenario (4): Change the software so that the vehicle is partially automated. It makes stop and go decisions as best it can, but the remote human driver is responsible for safety. The vehicle will stop at red lights and go at green lights on its own. However, the automation might sometimes make a mistake, so the remote operator is tasked with pressing “stop” if the vehicle misses a red light and pressing “go” if the vehicle gets stuck at a green light.
The driver is now acting as a remote safety supervisor, in a role comparable to that of a human supervising an SAE Level 2 automated vehicle. They are watching the vehicle drive, and are responsible for stepping in as needed to ensure safe operations. So they remain responsible for crashes caused by failing to press the stop button if the vehicle tries to go through a red light, or by pressing the go button at a red light to start vehicle motion, because a safety supervisor is still a driver for a road vehicle.
Scenario (5): The vehicle is still partially automated as in scenario 4. However, to improve safety the system is modified to detect when it is uncertain about a light color. When uncertain, the eAI computer stops the vehicle at the light and displays a prompt: “Is this light green or red?” The operator is instructed to press the red “stop” soft button if the light is red, and the green “go” soft button if the light is green.
The driver still acts as a safety supervisor at other times, as in scenario 4. This system simply adds a prompt to attract the driver’s attention when the automation is uncertain about the traffic light color.
Scenario (6a): The automation has gotten so reliable that a constant remote safety driver is no longer needed. The system has two operational modes: autonomous and remote supervised. During autonomous mode the system is entirely on its own. However, whenever the scenario 5 logic triggers the “Is this light green or red?” prompt, the system switches back into remote supervised mode and the remote operator becomes the driver.
Remote drivers are organized in a call center arrangement, and only connect with a vehicle when the prompt is displayed asking for advice about light color. Autonomous operation resumes after the remote operator has confirmed red or green by pressing “stop” or “go” and the interaction session ends.
If the remote operator makes a mistake on traffic light color, they are the driver during that remote supervision session. They hold responsibility for safety: they pressed the “go” button when they should have known they were at a red light, and the vehicle moved because of that press.
If the vehicle fails to ask the remote operator for input, or fails to stop before the traffic light when waiting for a remote operator to connect, then the vehicle automation has failed. But that is beyond the scope of this discussion and relates to the other safety areas we mentioned earlier.
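To make the scenario 5/6a mechanism concrete, here is a minimal hypothetical sketch. The threshold, names, and callback are illustrative inventions, not any vendor’s actual interface; the point is how an uncertainty-triggered prompt hands the decision to a remote operator.

    from enum import Enum, auto

    class LightCall(Enum):
        RED = auto()    # operator presses "stop"
        GREEN = auto()  # operator presses "go"

    CONFIDENT = 0.99  # hypothetical classifier confidence threshold

    def resolve_light(confidence: float, classifier_call: LightCall,
                      ask_remote_operator) -> tuple[LightCall, bool]:
        """Return (decision, human_decided). ask_remote_operator displays
        'Is this light green or red?' and returns the button the human pressed."""
        if confidence >= CONFIDENT:
            return classifier_call, False  # stays in autonomous mode
        # Scenario 6a: the vehicle holds at the light, switches to remote
        # supervised mode, and the operator's stop/go press is decisive.
        return ask_remote_operator(), True

    # Confident classifier: no human in the loop.
    call, human = resolve_light(0.999, LightCall.RED, lambda: LightCall.GREEN)
    assert call is LightCall.RED and not human
    # Uncertain classifier: whatever the human presses is what the vehicle does.
    call, human = resolve_light(0.60, LightCall.RED, lambda: LightCall.GREEN)
    assert call is LightCall.GREEN and human

Note what the second example shows: once the prompt fires, the human’s button press, right or wrong, determines vehicle motion.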
Scenario (6b): After a mishap, a crash investigation reveals that the source code for the Scenario 6a system treats the remote operator input as an input to a machine learning based autonomous operational system.6 The eAI system has been trained to trigger motion changes responsive to the stop/go inputs from the remote operator, which otherwise have no direct connection to conventional motion controls. Thus, the machine learning software can be said on a technical basis to be “responsible for all real-time driving tasks and decision-making”7 and the stop/go buttons are just two of a myriad of data inputs into a machine learning-based AI system.
In practice, the analysis finds that the machine learning training results in a system essentially guaranteed to move the vehicle through an intersection if the remote assistant is asked to choose and presses the green/go button. Because that was the point of asking the remote assistant to confirm the color of the traffic signal.
The remote operator is still a driver, unless you think that using machine learning-based software instead of conventional software to route data between a green/go button and the vehicle motors spinning somehow changes things. (If so, I’d really like to understand why. Regardless, consider the implications of that approach in the Gaming the System section in part 2 of this essay.)
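The distinction being leaned on in scenario 6b reduces to the difference between the two toy functions below. Both are hypothetical simplifications (stand-in weights instead of a real network), but they illustrate the point: if training makes the go-button input dominate, the causal chain from press to motion is the same either way.

    # Scenarios 3-6a: conventional routing; the press maps directly to motion.
    def conventional_controller(go_pressed: bool) -> float:
        return 1.0 if go_pressed else 0.0  # commanded speed

    # Scenario 6b: the press is just one feature among many fed to a learned
    # policy. Stand-in weights here; a real policy would be a large network.
    def learned_policy(features: dict[str, float]) -> float:
        # Training made the go-button feature dominate, because that was the
        # whole point of asking the remote assistant to confirm the light color.
        w = {"go_pressed": 10.0, "camera_red_score": -0.3, "lidar_clear": 0.1}
        score = sum(w[k] * features.get(k, 0.0) for k in w)
        return 1.0 if score > 0 else 0.0

    # Same press, same outcome: the vehicle moves because "go" was pressed,
    # whether or not a learned function sits between the button and the motors.
    features = {"go_pressed": 1.0, "camera_red_score": 1.0, "lidar_clear": 1.0}
    assert conventional_controller(True) == learned_policy(features) == 1.0
    assert conventional_controller(False) == learned_policy({**features, "go_pressed": 0.0}) == 0.0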
The get-out-of-jail-free card for negligent remote driving
Scenario (7): The leadership of the trolley company determines that having a trolley with a part-time manual operating mode is undesirable for generating investor interest. Fully Autonomous is what will raise the next billion dollars they need to keep innovating.
No technical change is made to the system whatsoever. However, the remote operator job description is changed to “remote assistant.” The company web page is changed from stating that remote operators intervene to control motion when requested by the automation to instead say: “remote assistance personnel provide advice and support to the trolleys, but do not directly control, steer, or drive the vehicle.”8
The word “directly” is doing some heavy lifting here. On a technical basis a human operator’s “go” command is being scrubbed through a machine learning system to make the vehicle move, making it “indirect” in some sense. It is unclear why running that same signal through conventional control software would not also be “indirect” for a human driver in the vehicle by that same logic. The apparent claim is that because this is not conventional software, that somehow breaks the highly repeatable causal connection between pressing “go” and vehicle movement.
The trolley company argues that based on this characterization of remote assistance, their remote operators are not drivers. The automation alone is “responsible for all real-time driving tasks and decision-making” as per Scenario 6b. So it does not matter if the remote assistants are overseas, because they are not responsible for driving. That allows use of cheap labor markets and increased profitability.
The remote operator and the trolley company (or ride hail company) are claimed to have no responsibility for incorrect operational decisions by any remote operator, because there is no person driving, just a machine. Any problem must therefore be a design defect.
When is a remote driver not a driver?
Absolutely nothing operational or technical has changed between Scenario 6b and Scenario 7. The only changes are the story being told by company leadership and the titles of job descriptions.
Somehow, the AV industry would have us believe, calling a remote operator an “assistant” instead of a “driver” means the remote operator no longer has any responsibility for safety.
The above scenarios are not intended to be an exact description of the progression of any particular company. But they are plausible as an aggregate industry path to robotaxis, once steering and more complex operational considerations are added to the mix. The described progression might not be so different from the path taken by Chinese robotaxi companies that have remote monitors watching multiple cars. It might also be close to the path Tesla is already following.
The problem presented is where, exactly, in this progression a remote driver responsible for safety becomes somehow not a driver and not responsible for safety. There does not seem to be any point on the path where it is obvious that the remote operator has stopped being a driver, for reasons other than the company claiming that has somehow occurred.9
I would argue that any remote human who is providing “advice” to an automated driving system that has a high probability of changing vehicle behavior in a way relevant to safety is in a very real sense a remote driver that should have a duty of care for the safety of road users.10 Even if they are just pressing a stop/go button. Even if they are just responding to a multiple choice prompt of “is this traffic light red or green?” with a mouse click instead of a button. If a line is to be drawn here, it cannot simply be based on public relations talking points.
If a remote assistant can say “this light is green” when the light is red, and that results in a crash, it is disingenuous to claim that they have no responsibility whatsoever for the mishap. The scenario of driving through a red light in response to a remote assistant input is not hypothetical — it has already happened with a Waymo robotaxi.11
In part two of this essay I’ll describe the implications for other eAI systems beyond robotaxis, and the ways this issue can be fixed.
… continued next week in part 2 …
Meanwhile, for those wanting to dig deeper, I have previous articles on different angles of this same topic:
Related, and also worth reading are pieces from Junko Yoshida:
Robotaxi Teleoperators: We Know They’re There. What Do They Know?
Behind Waymo’s ‘Independently Audited’ Robotaxi Teleoperation
Phil Koopman has been working on self-driving car safety for about 30 years, and embedded systems for even longer. For more on applying AI, see his new book: Embodied AI Safety.
1. Mishaps will happen even with generally safe technology. Victims will need to have access to a reasonable compensation system when they do, just as victims have access to such a system for human-operated complex systems.
2. This is the law in Oklahoma. It’s laughable because computers are not legal people and cannot be accountable for anything. But the AV industry got their way and it is the law nonetheless. Link.
3. This is not a legal essay, and IANAL. So I stay away from the phrase “proximate cause.” See: https://en.wikipedia.org/wiki/Proximate_cause
4. I do not opine on a penalty that should be assessed. For this essay I assume human drivers are not put in a no-win situation, and hold them to having the same duty of care one would expect of a human driver in any other road vehicle.
5. This is intentionally paralleling the “but-for” test relevant to legal proximate cause. If you think assigning responsibility to the human vehicle driver for this crash is debatable, then assume the human driver did this on purpose to cause a crash. Can you really argue that they should have no responsibility for their actions if they did it on purpose?
6. Would a robotaxi team build a system in which a safety critical “stop” command was merely an input to a big computer running machine learning? Absolutely. This is not hypothetical.
7. See Waymo’s detailed letter to Senator Markey from Feb. 17, 2026 at https://assets.ctfassets.net/7ijaobx36mtm/7E5uOzS5F7Z1yuFoz27BIc/680a27f89a3aae48977db655a5f45005/Sen._Markey_RA_Letter_Waymo__Response.pdf
8. Paraphrased from a Waymo statement about their remote assistants. Note that Waymo did not necessarily take the precise path through scenarios indicated, but did more or less end up at this final scenario. See: https://www.reuters.com/technology/waymo-defends-use-remote-assistance-workers-robotaxi-operations-2026-02-17/
9. They typically do this by referring to standards wording such as SAE J3016 that define terms such as “remote assistant”. However, safety is out of scope for that standard, and the standard was not written with an eye to the legal concept of duty of care or the nuances of the legal definition of a “driver” as it varies from state to state. I think the relevant engineering standards do not reflect legal and ethical reality, even if some do find them reasonable in a purely engineering context. Citing a definition that does not address the question at hand will not make this issue go away.
10. The artificial separation of the Dynamic Driving Task (DDT) in J3016 from other driver responsibilities also gets in the way of this topic, but is beyond the scope of this piece.
11. “In January [2024], an incident took place where a Waymo robotaxi incorrectly went through a red light due to an incorrect command from a remote operator, as reported by Waymo. … The moped driver, presumably reacting to the Waymo, lost control, fell and slid, but did not hit the Waymo and there are no reports of injuries.” See: https://www.forbes.com/sites/bradtempleton/2024/03/26/waymo-runs-a-red-light-and-the-difference-between-humans-and-robots/