I just finished a YouTube video on your subject. We agree, but have slightly different takes on the takeaways. Here is the full thing... free, no signups, and it can be sent around. But... it's an hour to watch. I call it "Full Self-Driving Horses": https://www.youtube.com/watch?v=G4gjsCa65cY
I agree with most of what you are saying here, but your #3 objection seems to assume that what matters most is processing power, and that draws a false analogy between people and technology. It is certainly true that people could never process lidar, camera, and radar data at the same speed as an AV, but we make up for that by processing information a lot more intelligently. We can therefore anticipate trouble on the road before it occurs, and you can see that ability at work when Tesla drivers take over from FSD.
I agree with everything else, and I have always argued that because it is impossible for humans to oversee fully automated driving competently, it is imperative that people continue to do part of the driving task. That is, we should not allow the automation to steer the car, only let it brake and accelerate. That forces people to remain engaged with the driving, and allows them to take over when something goes awry with the braking and accelerating. By the way, driving like this is still much more relaxing than driving without automation, so it is still good consumer value. It has the additional advantage that it is much easier to understand what the automation is doing, so you can tell when it is going wrong.
I took some Waymo rides in SF recently, and it really did quite brilliantly for over an hour. But then, all of a sudden, it drove down the wrong side of the road, passing a row of cars waiting at a traffic light. Of course we could not do anything about it, but it goes to show that these systems, even when fantastic in the majority of situations, are not to be trusted completely.
https://www.youtube.com/watch?v=LOdnrmIuNww
As always, I appreciate your thoughtful work.
Erik, thanks for adding these quite reasonable points to the conversation.
The reaction-time point should be taken a bit more narrowly. If you allow automation to operate with a narrower margin for error than a human operator would, in order to increase efficiency, you leave insufficient margin for the human to react and correct a mistake. The point is precisely that doing so is a bad idea: more compute power doesn't help when the computer makes a mistake and the person does not have time to react.
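To put rough numbers on that, here is a minimal back-of-the-envelope sketch. It is my own illustration rather than anything from the essay, and the speed, headway, and reaction times are assumed values chosen only to show the shape of the problem: once the automation budgets its safety margin around its own latency, a human supervisor is left with negative time to catch its mistake.

```python
# Back-of-the-envelope illustration (assumed numbers, not measured data):
# if automation runs with a headway tuned to its own reaction time,
# the remaining margin is too small for a human to catch its mistake.

speed_mps = 30.0             # ~108 km/h highway speed (assumed)
automation_reaction_s = 0.2  # assumed machine detection-to-brake latency
human_reaction_s = 1.5       # rough human perception-reaction time

# Headway (time gap to the car ahead) the automation might choose
# if it only budgets for its own latency plus a small buffer.
tight_headway_s = automation_reaction_s + 0.3   # 0.5 s gap

# Margin left for the human once the automation fails to act:
human_margin_s = tight_headway_s - human_reaction_s

print(f"Gap to lead car: {tight_headway_s * speed_mps:.0f} m ({tight_headway_s:.1f} s)")
print(f"Time left for the human to correct: {human_margin_s:.1f} s")
# A negative margin means the human physically cannot react in time,
# no matter how much compute the automation has.
```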
Indeed, there are smarter ways to run systems. This essay is about giving folks a grounding in classical issues that were recognized long before automated driving became a commercial product.
This video is fantastic, and a great edge case; I definitely plan to show it in my class. Overall, though, I am impressed by the amount of work Waymo seems to have put in.