Discussion about this post

Phil Koopman:

Reuters has an article looking at various reported incidents involving Tesla robotaxi driving behavior in Austin:

https://www.reuters.com/business/autos-transportation/teslas-robotaxi-peppered-with-driving-mistakes-texas-tests-2025-06-25/

Robert Thibadeau:

I have never heard this: "But we were promised that robotaxis would not make human driving mistakes." Please tell me who said this? I have honestly never heard it claimed by the self-driving car guys. What I heard was that they would make fewer driving mistakes than humans, and that overall they would be safer. This complaint invents a promise in order to make a fallacious case. Sadly, this kind of 'exaggeration' just sillifies the laws. Do we make it illegal to make any mistake a human might make? (Drunk driving, or a rat shorting a wire?) In any event, the second measure is measurable: "overall safer."

Does Robotaxi work like Uber in terms of getting a ride? With Uber you have to say where you get picked up and where you are going. I would assume that, like Uber, sometimes you will be disappointed with no ride because no driver will take the offer. I spent a lot of time in Ubers (or Lyfts) and learned where and why to have a friend drive me instead. (Never try to cross the river from Boston city proper to Cambridge, or vice versa.) Teslas were popular, and a good FSD guy (or girl) was pretty amazing at anticipating where the automation was likely to make mistakes.

I am certain these systems must have a way to risk-evaluate routes. I would love to be able to order a robotaxi if it can find a safe(r) route. When I am driving, I often take the longer, less stressful route just to avoid the possibility of something bad happening (like in Atlanta, DC, or Southern California). I can't wait to have a voice Grok in my robotaxi so I can ask questions and give advice. (Hi Phil.)

