Robotaxis and Personal Safety
Do we give the computers command authority to intentionally hurt road users for the purpose of self-defense of occupants?
Robotaxi safety discussions often do a deep dive into crash risk and comparative injury data. But personal safety is also a concern. If robotaxi deployments are to succeed at scale, it is important that they ensure acceptable personal safety. Some riders are motivated to take robotaxis to avoid the risk of an issue with a human ride-hail driver. But robotaxi rides present other risks to personal safety. This point was recently highlighted by a spate of vandalism of occupied robotaxis. In another recent incident, a robotaxi was stopped by men harassing a female occupant.
The idyllic picture of a vulnerable passenger being safely cocooned in a robotaxi without a human driver neglects some key personal safety concerns. It might be tempting to write off early harassment incidents as comparatively harmless pranks, nuisances, or growing pains. However, they are indications of a fundamental personal safety issue that needs to be addressed: there is nobody with command authority to override automated safety protocols in an emergency, and bad actors can exploit this fact.
An important feature of a personally owned conventional vehicle is having more control over personal safety. A locked private vehicle provides a measure of physical protection against potential threats. The solo occupant/driver does not have to share a vehicle with a stranger as would be the case as a passenger in a conventional taxi or ride-hail service.1 A common advocacy talking point is that a single-occupancy robotaxi extends this safety benefit to those who cannot drive themselves or do not have the resources to own a private vehicle.
But the reality is more complex.
Personal safety extends past just riding solo to include embarking at a safe location, selecting routes that lower personal risk, deciding not to exit the vehicle at a preselected destination that turns out to look dangerous, and not being stranded in an undesired location. These points are known to be problems with at least some robotaxi rides.
Another aspect of personal safety is what happens when other road users threaten the vehicle occupant. (To be sure, threatening or harming a robotaxi or its occupant is unacceptable behavior! But we can’t pretend it won’t happen. And with recent reports, it is time to stop kicking that can down the road.)
Occupied robotaxis might present a tempting target for harassment of occupants, because harassers know that the computer driver has been programmed to avoid injuring pedestrians. Just stand in front of an occupied robotaxi and you can do pretty much whatever you want to it until the cops arrive — if they have been called, and if they have the resources to respond to yet another such call.
A robotaxi rider inside a vehicle being subjected to harassment or vandalism is in a scary position. Exiting the vehicle presents the risk of direct exposure to the bad actors. Staying in the vehicle might work out — or might not. Recall that a robotaxi got itself stuck in a crowd and was set ablaze in February. (To be sure, that robotaxi was unoccupied. However, simply knowing this has happened will reasonably create anxiety for a trapped rider, regardless of anyone’s opinion as to whether that incident could happen to an occupied robotaxi.)
There are many different ways to look at this, but I think the core issue is one with much broader implications: lack of command override ability with regard to safety protocols.
Why is it that harassers are emboldened to block a robotaxi that would not do that to a human-driven taxi? One expects it is because they know a human driver would try to — or succeed at — bumping them out of the way, running them over in self-defense, backing away, or swerving around, even if there were some risk to a potential attacker from the vehicle performing an evasive maneuver. And a harasser or attacker expects that remote assistants who lack steering wheels really can’t do anything except call police, who will take a while to arrive.
In the moment, the attacker has all the power. This power imbalance is fundamentally different than the situation with a human-driven vehicle.
Human drivers have the ability to override road rules, safety protocols, and everything else if they wish to do so. They will be held accountable for any losses after the fact, so this power must be used with care! But the fact remains that human drivers have the power to purposefully ignore safety considerations in exigent circumstances, especially if they can successfully argue that they acted in self-defense.
Robotaxis lack both a sense of self-preservation and proper accountability,2 so giving them the power to do a command override of safety protocols is highly problematic. If you thought ethical quandaries regarding no-win crash scenarios were tough, imagine a discussion on whether we should tell a robotaxi it is OK to run someone over on purpose because they present a threat to occupants. (With only slight exaggeration: “So sorry to have run over a kid in a crosswalk — the computer driver thought they were threatening the occupant, so it was self-defense. Since in this state the computer is legally the driver, there is nobody to sue or send to jail. We’ll see if we can fix that bug for next time.”)
Or we could let the passenger take responsibility for command override. But — should a 14-year-old be able to press a panic button to enable potentially dangerous behavior? How about a 17-year-old? Should it matter whether that person has a driver’s license? How about an 8-year-old riding solo who feels scared? How about a drunk passenger? Authenticating legitimate invocation of a panic request gets problematic in a hurry, and doesn’t solve the problem for the most vulnerable passengers.
So what’s the personal safety plan for robotaxis? Do we give the computers command authority to intentionally hurt road users for the purpose of self-defense of occupants? Do we do something else? Will that something else work in practice? Who/what takes responsibility?
At least the following elements need to be addressed as part of a wider look at personal safety:
- Avoiding dropoff areas the passenger feels are unsafe.
- Avoiding route portions the passenger feels are unsafe.
- Deterring, avoiding, and evading attacks (harassment, property damage, threats of personal harm).
- Avoiding sharing a ride in a way any passenger feels is unsafe.
- Avoiding being stuck in a robotaxi, whether due to equipment malfunction, a poor route choice (e.g., driving into a violent demonstration), routing failure, or other reasons.
- Being able to leave a dangerous area (for example, an active shooter scene or an area with an unruly public gathering) even if doing so presents danger that would in other situations be considered mildly unacceptable.
To be sure, resolving these issues is not easy. Technologists just love themselves a pure tech solution. But pure tech solutions are notoriously brittle when dealing with a messy socio-technical problem such as harassment and personal safety.
A solution should recognize at least the following factors:
- Personal safety is especially important to more vulnerable demographic segments, such as women, the elderly, and children, particularly when they travel alone. Also potentially at risk are identifiable minority groups in areas prone to abusive behavior based on race, gender, ethnicity, religion, or other factors.
- Occupants’ perception of safety is likely to matter more than statistical physical harm outcomes. Being terrorized as a passenger exacts a very real toll that does not show up in injury and fatality statistics. But it does make for high-profile negative news cycles and potential negative public sentiment.
- Human-driven ride-hail and taxi services have issues with personal safety as well.3 Robotaxis don’t have to be perfect, but they need to be at least competitive on personal safety. Pretending robotaxis are a panacea for personal safety will blow back after a small number of high-profile incidents.
- Not every incident will end happily. The bar might be set at what a reasonable human taxi driver would have done in a comparable situation, not at perfection.
- As with other aspects of safety, there will be public perception issues if robotaxis fail differently than human drivers do. Each incident will be scrutinized as to whether a human driver would have done better. It will be difficult to book-keep the wins and losses, and losses will get more weight than wins. This likely sets the bar higher for robotaxis than for human-driven taxis. That might not seem fair to robotaxis, but denying that reality means losing the argument about personal safety while the discussion is just getting started.
- Safety of occupants is a higher priority than avoiding equipment damage. There will be an obligation to attempt to avoid harm to harassers to the maximum degree practicable while keeping the passenger safe. Lawyers will need to be involved in policy decisions on this point.
- Not every passenger is competent to issue a safety protocol override command, whether due to age, inebriation, or other factors. If a passenger is riding because they should not be driving, then giving them command override authority is problematic. But having no command override capability for passengers might also be problematic.
- Remote driving and remote assistance have their own problems and will not provide a complete solution. It is doubtful they will provide a good enough solution as remote assistance is currently practiced, because companies attempt to deny that remote operators are relevant to driving safety decisions.
- A related problem of on-demand emergency egress needs to be solved. Passengers need the ability to exit at an unscheduled location if they feel the need to do so, such as if the robotaxi catches fire due to a battery malfunction, or if they are in some sort of personal distress. But passengers might not be competent to judge the risks involved, or even know that an evacuation is advisable.
As to mechanisms to accomplish these goals, one can envision a wide variety of strategies: companies ignoring this issue as long as possible, blaming regulators for lax oversight, blaming harassers for bad behavior, blaming passengers for not calling e911, providing remote operators with the ability to override normal safety protocols, providing remote operators with driving controls for emergency use only, providing in-vehicle driving controls for passengers, providing an in-vehicle panic button, vehicle-mounted countermeasures (super-bright lights, painfully loud noisemakers, more?), aggressive legal prosecution of harassers, up-armoring vehicles, pressing for enhanced criminal penalties for harassing robotaxis, and so on. None of these is going to be a silver bullet, and I am not advocating for any one approach in particular.
It will probably take a multi-pronged approach executed with finesse. Or maybe this just turns into yet another social problem that is ineffectively managed. Time will tell. However, this is the sort of thing that can easily get out of hand as the technology proliferates from a novelty into an everyday part of society.
If the robotaxi industry wants to scale up successfully, it needs to figure out a proactive strategy for ensuring personal safety.
(Portions of this piece are derived from a 2022 article that emphasized start and end of trip safety, which in turn was an adaptation of Section 4.1.3 of my book: How Safe is Safe Enough? Measuring and Predicting Autonomous Vehicle Safety.)
1. A popular meme goes something like this: Years ago we were told not to get into cars with strangers and not to talk to strangers on the Internet. Now we literally contact strangers via the Internet so we can get into their cars.
2. Lack of accountability for harm due to dangerous robotaxi behavior is something the autonomous vehicle industry has sunk years of effort and lots of money into pursuing, with significant success.
3. While ride-share companies recognize that personal safety is a key issue and put effort into improving it, personal safety needs more work. See Marshall 2019: https://www.wired.com/story/criminologist-uber-crime-report-highly-alarming/ and Siddiqui 2021: https://www.washingtonpost.com/technology/2021/10/22/lyft-safety-report/