3 Comments

Drunk drivers are mentioned as being included in the human-based fatality rate that (unlike for automated systems) is available. Is it also relevant that the robotaxi companies currently limit their operations to low-speed zones, so the comparable fatality rate (for human drivers in the same operational design domain) is even lower?

Yes, the risk of a fatality drops dramatically at lower speeds. That advantage starts going away on highway runs to the airport.

Additionally, using ride-hail drivers as the comparison population tends to elevate the baseline risk relative to the general population.

If done well, the comparisons should take all this into account. Some companies aspire to do well. Some companies are just blowing smoke.
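To make that concrete, here is a minimal sketch of what an ODD-adjusted comparison could look like. All rates and adjustment factors below are illustrative placeholders, not real statistics:

```python
# Hedged sketch: adjusting a human-driver fatality baseline to the AV's
# operational design domain (ODD) before comparing.
# Every number here is an illustrative assumption, not real data.

def adjusted_baseline(general_rate_per_100m_miles: float,
                      low_speed_factor: float,
                      ridehail_factor: float) -> float:
    """Adjust a general-population fatality rate to a specific ODD.

    low_speed_factor < 1.0 if the ODD excludes high-speed roads
    ridehail_factor  > 1.0 if the comparison pool (ride-hail drivers)
                     carries more risk than the general population
    """
    return general_rate_per_100m_miles * low_speed_factor * ridehail_factor


general_rate = 1.3        # fatalities per 100M vehicle miles (placeholder)
low_speed_factor = 0.5    # surface-street-only ODD (placeholder)
ridehail_factor = 1.2     # ride-hail driver baseline (placeholder)

baseline = adjusted_baseline(general_rate, low_speed_factor, ridehail_factor)
print(f"ODD-adjusted human baseline: {baseline:.2f} per 100M miles")
# The AV fleet's observed rate should be compared against this adjusted
# baseline, not against the raw general-population number.
```

The only point of the sketch is that the baseline has to be matched to the same operational design domain and driver population before any safety comparison is credible.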

“Mass common-cause failures of computer-based systems cannot be ignored as a possibility…”

I agree, and think this is a big point. If a human driver behaves in a way that leads to catastrophe, society limits blame to that driver and does not presume that their actions reflect on other drivers. (Hence the term “accident”.)

If a robot driver commits the same action, leading to catastrophe, what would distinguish that robot from all other robots, at least of a certain class? After the pedestrian dragging incident in SF, many/all Cruise vehicles were removed from service, not just the one which dragged the pedestrian. I don’t know that we label a robot harming a human as an “accident”. (How often does SkyNet, of Terminator fame, appear in your lectures? It appears in some of mine! 😁)

But Waymo was seen as distinct from Cruise, and let’s imagine there is some objective criterion by which Waymo is “better”. Should any AV class be allowed to operate if it is not the best? The pedestrian is not a party to the market contract between AV provider and passenger, yet still faces a risk over which they have no control. We have a legal system in place which addresses the heinous human actions of individual drivers, but I don’t see that carrying forward to individual robots. I posit that Asimov’s First Law of Robotics will describe the cultural, and eventually legal, norm.

This week I’m giving a speech regarding organizational dysfunction which led to minor (compared to AV) software lapses in the GM Ignition Switch and Boeing 737 Max MCAS crashes. Imagine a sociopath introducing a virus into a control system which randomly and rarely killed people. If caught, we have a means to deal with this person as an individual. But if organizational dysfunction of a similar magnitude is found, all products of that class are removed from operation. (GM and Boeing had recalls/groundings, and Boeing has yet to recover.)

Is there a viable business model in AVs? How would you calculate this risk?
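One hedged way to frame that calculation is to compare the expected harm from independent per-vehicle failures against a rare common-cause failure that hits the whole fleet at once. Every number below is a made-up assumption purely for illustration:

```python
# Hedged sketch: independent vs. common-cause failure risk for an AV fleet.
# All probabilities and counts are illustrative assumptions, not data.

fleet_size = 1_000                # robotaxis in service (assumption)
miles_per_vehicle_year = 50_000   # annual mileage per vehicle (assumption)
p_fatal_per_mile = 1e-8           # independent per-mile fatality risk (assumption)

# Independent failures: expected fatalities scale linearly with exposure.
expected_independent = fleet_size * miles_per_vehicle_year * p_fatal_per_mile

# Common-cause failure: one software defect (or malicious change) affects
# every vehicle in the fleet at the same time.
p_common_cause_per_year = 1e-3    # chance of a fleet-wide defect per year (assumption)
fatalities_if_triggered = 20      # harm if the defect manifests fleet-wide (assumption)
expected_common_cause = p_common_cause_per_year * fatalities_if_triggered

print(f"Expected fatalities/year, independent failures: {expected_independent:.3f}")
print(f"Expected fatalities/year, common-cause failure:  {expected_common_cause:.3f}")
# Even when the expected values look comparable, the common-cause case
# concentrates harm in one correlated event, which is what drives fleet-wide
# groundings and recalls rather than per-driver blame.
```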
