2 Comments

Hello, and thank you for this article!

A great input to all the ongoing work on hypervision/supervision/remote control ...

For information: in France, the administration and industry have settled the framework for AV commercial services. The remote operator must have specific, dedicated training. To access this training, the operator must hold the driving licence for the vehicles he/she supervises.

I haven't yet understood your pedestrian example: maybe a prototype AV might miss it. But how can an AV that has demonstrated a safety concept in order to be allowed to run a commercial service on open roads miss an obstacle crossing just in front of the vehicle? Shouldn't that be a clear failure of the self-driving system, whatever the remote operator's advice?

Let's say it was at the edge of the ODD (at night, with buildings ...) and the AV asked the remote center.

Open discussion & thinking:

1 - The vehicle is driving and is responsible for driving safety.

2 - The remote control center module interacting with the driving tasks may ask the vehicle to restart, but the vehicle may keep a simple robotic collision avoidance.

-> If the platform tells the ADS "the door is closed," that doesn't mean the platform is responsible for driving; the ADS keeps the responsibility to move given this information.

-> If the remote operator says "the path is OK on this side," maybe the ADS should still be able to see and hard-brake if necessary. The remote operator has overridden the "comfort driving rules" but not the core anti-collision rules?
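The split sketched in points 1 and 2 can be illustrated as a tiny arbitration function. This is only an illustrative sketch (all names, thresholds, and the `decide` logic are hypothetical, not from any real ADS stack): remote advice is treated as a low-priority hint that may relax a comfort rule, while the onboard anti-collision rule keeps veto authority that no remote input can override.

```python
# Hypothetical sketch: remote advice can waive a "comfort" rule,
# but never the core onboard anti-collision rule.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RemoteAdvice:
    path_clear: bool          # operator believes the path is clear

@dataclass
class Perception:
    obstacle_in_path: bool    # onboard sensors see an obstacle
    tight_clearance: bool     # clearance is legal but uncomfortable

def decide(perception: Perception, advice: Optional[RemoteAdvice]) -> str:
    # Core anti-collision rule: cannot be overridden by any remote input.
    if perception.obstacle_in_path:
        return "hard_brake"
    # Comfort rule: normally wait, but a remote operator may waive it.
    if perception.tight_clearance:
        if advice is not None and advice.path_clear:
            return "proceed_slowly"   # advice relaxed the comfort rule only
        return "wait"
    return "proceed"

# Even with "path is clear" advice, a detected obstacle still hard-brakes:
print(decide(Perception(obstacle_in_path=True, tight_clearance=False),
             RemoteAdvice(path_clear=True)))   # hard_brake
```

Here the ADS "keeps the responsibility to move with this information": advice changes behavior only inside the safety envelope.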

Although I see the AV as solely responsible for driving safety at the AV level (but I love to be challenged), the control center certainly plays a role in safety!

1° [system design] The control center is (as with driver-operated mobility services) responsible for the operation (safety at the operational level: the technical system applied at one location), including passenger security and quality of service.

We often talk about pure driving (the angle of the ADS or OEM), but if we analyse safety at the operational level, we see some additional unexpected events.

e.g.: The vehicle drives into a dangerous street (riots, industrial leaks ...) -> the AV will stop in front of the people in the street, but having hundreds of demonstrators around the AV may be an unsafe situation for the passenger -> we should have stopped the vehicle even before it could see what was in that street.

...

2° [operating procedures] The control center ensures that all procedures are correctly applied at all times.

3° [continuous improvement] The control center ensures constant monitoring, reporting, and comparison across operations.


William: thanks for the comments.

First, in the US there is no requirement to demonstrate safety at any granular level to a regulator. But let us set that aside...

If a remote assistant might be consulted, that presumes the vehicle cannot handle the situation on its own. Right now those remote assistants are very definitely involved in driving decisions, so they play a role in safety. The fact we had a mishap involving a remote assistant telling a robotaxi that a red traffic light was green tells us all we need to know. But even if the remote advice is not intended to overtly override vehicle safety decisions, what if the robotaxi has a borderline probability estimate of a hazard and the remote assistant resolves the ambiguity? That amounts to telling the robotaxi what to do. This gets problematic in a hurry. Thus the essay.

If remote assistants are limited to passenger interactions, indeed that is a very different role as you suggest.
