On point, technically astute, and sadly impractical. Implementation of even a subset of these suggestions would require a government that cares about the welfare of people who live here, and that is definitely not the case right now. (Recommended search terms: measles, severe weather warning, Boeing, Chevron deference.)
Thank you Phil.
I think that your layered construct provides needed clarity for establishing AV safety. It may need support from a complementary regulatory Layer 0 that comprises down-and-dirty minimum design requirements.
Layer 0 should be based on current knowledge and heritage from other technologies, and is perhaps needed to provide the foundation for the other Layers. For example, the performance of any AV should be inspectable (by government officials, fleet operators, or individual owners), both for conformance to mechanical/visibility/lighting/annunciator requirements like current vehicles and for conformance to design intent for logical safety- or life-critical functionality, e.g. inspectable records of whether operational deviations from planned trajectories conform to the control system's design requirements. The current practice of data hide-and-seek cannot be allowed; it should be excluded at Layer 0, and that would support Layers 1-4. This transparency would be needed to underlie your Layer 1, particularly where safe AV operation might be on a knife's edge that is detectable, before someone is killed, by auditing actual performance against design limits.
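To make the auditing idea concrete, here is a minimal sketch of the kind of check a regulator or fleet operator could run against inspectable trajectory records. The log format and the 0.5 m lateral deviation limit are illustrative assumptions, not taken from any real AV data interface:

```python
# Hypothetical Layer 0 audit: flag logged deviations from planned
# trajectories that exceed a design limit. The record schema and the
# 0.5 m limit below are illustrative assumptions only.

DESIGN_LIMIT_M = 0.5  # assumed maximum allowed lateral deviation from plan

def audit_trajectory_log(records):
    """Return the records whose actual path deviated beyond the design limit.

    Each record is a dict like:
    {"timestamp": ..., "planned_lateral_m": ..., "actual_lateral_m": ...}
    """
    violations = []
    for rec in records:
        deviation = abs(rec["actual_lateral_m"] - rec["planned_lateral_m"])
        if deviation > DESIGN_LIMIT_M:
            violations.append({**rec, "deviation_m": deviation})
    return violations

log = [
    {"timestamp": 0.0, "planned_lateral_m": 0.0, "actual_lateral_m": 0.1},
    {"timestamp": 0.1, "planned_lateral_m": 0.0, "actual_lateral_m": 0.8},
]
print(audit_trajectory_log(log))  # flags only the 0.8 m excursion
```

The point is that such a check is trivial to run if the records exist and are inspectable; the hard part is mandating the records, which is exactly what Layer 0 would do.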
Another Layer 0 parameter might be confirmation that AV design validation encompasses known instances of AV failures that resulted in injury or death. (Clearly, Waymo did not learn from Cruise.)
Another Layer 0 example could be redundancy standards for safety- and life-critical logic. For example, all AV occupant safety-critical logic-driven functionality is (perhaps) single-fault tolerant within the ODD, all life-critical logic-driven functionality is (perhaps) dual-fault tolerant within the ODD, and safety-critical functionality with regard to vulnerable road users is (perhaps) triple-fault tolerant.
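As a rough illustration of what such tiers could imply for architecture, here is a sketch translating each tier's fault count into a minimum channel count for a simple majority-voting scheme, where tolerating f faulty channels requires 2f + 1 voters. Both the tier-to-fault-count mapping and the voting assumption are illustrative, not drawn from any standard:

```python
# Illustrative mapping from the redundancy tiers above to minimum channel
# counts under majority voting (2f + 1 voters to outvote f faulty channels).
# The tiers mirror the "(perhaps)" values in the text and are assumptions.

TIERS = {
    "occupant_safety_critical": 1,  # single-fault tolerant
    "life_critical": 2,             # dual-fault tolerant
    "vru_safety_critical": 3,       # triple-fault tolerant
}

def min_voting_channels(faults_tolerated):
    """Channels needed so a correct majority survives this many faults."""
    return 2 * faults_tolerated + 1

for tier, f in TIERS.items():
    print(f"{tier}: tolerate {f} fault(s) -> {min_voting_channels(f)} channels")
```

Other architectures (e.g., self-checking pairs that fail silent) need only f + 1 channels, which is one reason a Layer 0 rule would likely specify the required fault tolerance rather than a specific channel count.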
As an aside, statistical arguments for AV safety should include estimated parameter uncertainty compared with the uncertainty of comparable human-driven performance, with no credit given for benevolent intent.
Thanks for the additional thoughts Fred!
My favorite Layer 0 requirement is conformance to industry consensus safety standards, even if only self-certified. That builds a paper trail for the other layers to audit in the event of a mishap. Your suggestions overlap this idea as well.
It is all well reasoned and well argued. But tort liability in a system based on common law and public juries simply would not work for autonomous drivers. Most people have an irrational fear of self-driving cars.
It currently "kind of works" with humans because most people have limited means of paying back damages, so it reverts to the insurance case. Suing deep-pocketed companies would bring out a cottage industry of predatory lawyers.
Thanks for the feedback. I suspect any irrational component of the fear will fade over time as people get used to the technology.
While the predatory lawyer trope is a common concern, that is the cost of having a regulatory system that does not require type approval to establish a presumption of safety. In Europe liability exposure is much lower because they have type approval. If you have neither type approval nor significant liability exposure, then there is in effect no consequence for putting an unsafe car on the road, robotaxi or otherwise.