Do you have evidence for "clearly more dangerous"? Because what numbers exist say the opposite. They are now pushing a million (!) of these vehicles on US roads and to my knowledge the Bay Bridge accident [1] is the only example of a significant at fault accident involving the technology.
It's very easy to make pronouncements like "one accident is too many", but we all know that's not correct in context. Right? If FSD beta is safer than a human driver it should be encouraged even if not perfect. So... where's the evidence that it's actually less safe? Because no one has it.
[1] Which I continue to insist looks a lot more like "confused driver disengaged and stopped" (a routine accident) than "FSD made a wild lane change" (that's not its behavior). Just wait. It took a year before that "autopilot out of control" accident was shown (last week) to be a false report. This will probably be similar.
- A Tesla that drove into the median divider near Sunnyvale on Highway 101, because it thought the gore point was an extra lane. The crash split the car in half and killed the driver. [1]
- A Tesla that autonomously ran a red light and killed two people. [2]
- Multiple accidents where Teslas have driven into parked emergency vehicles / semi trucks.
Is it quantitatively safer than human drivers? I don't have that data to say one way or the other. But it's not correct to say the Bay Bridge is the only at fault accident attributable to autopilot.
Those are autopilot accidents from before FSD beta was even released.
I mean, it's true that if you broaden the search to "any accident involving Tesla-produced automation technology" you're going to find more signal, but your denominator, the miles driven under that broader umbrella, also grows by one (maybe two) orders of magnitude.
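To make that arithmetic concrete, here's a back-of-the-envelope sketch in Python. Every figure in it is invented purely for illustration; it is not Tesla's actual data, just the shape of the numerator/denominator argument:

    # All numbers are hypothetical. The point: broadening from "FSD beta" to
    # "all Autopilot" adds accidents to the numerator, but it also adds one or
    # two orders of magnitude of miles to the denominator, so the per-mile
    # rate can still come out lower.
    fsd_accidents, fsd_miles = 1, 50e6    # made-up FSD beta figures
    ap_accidents, ap_miles = 20, 5e9      # made-up Autopilot-wide figures

    for label, accidents, miles in [("FSD beta", fsd_accidents, fsd_miles),
                                    ("All Autopilot", ap_accidents, ap_miles)]:
        print(f"{label}: {accidents / miles * 1e6:.3f} accidents per million miles")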
Tesla's published safety report numbers show pretty unequivocally that AP is safer than human drivers, and by a pretty significant margin. So as to:
> Is it quantitatively safer than human drivers? I don't have that data to say one way or the other.
Now you do. So you'll agree with me? I suspect not, and that we'll end up in a long discussion about criticism of that safety report instead [edit: and right on cue...]. That's the way these discussions always go. No one wants to give ground, so they keep citing the same tiny handful of accidents while the cars keep coming off the production line.
It's safe. It is not perfect, but it is safe. It just is. You know it's safe. You do.
This data does not show what it pretends to show. The underlying statistics are heavily skewed in favor of Tesla by various factors:
* Autopilot can only be enabled in good conditions and relies on the human driver in all others, to the point of handing over to the human when it gets confused. Yet they compare to all miles driven in all conditions.
* Teslas are comparatively new cars that have, as they proudly point out, a variety of active and passive safety features that make them inherently less prone to accidents than the average car, which includes old, beat-up cars with outdated safety features. Teslas are also likely to be better maintained than the average car in the US, by virtue of being expensive cars driven by people with disposable income.
* Teslas are driven by a certain demographic with a certain usage pattern. Yet they provide no indication of how that skews the data.
Tesla could likely provide a better picture by comparing their data with cars in the same age and price bracket, used by a similar demographic, in similar conditions. They could also model how much of the alleged safety benefit is due to the usual active and passive safety features (brake assistance, lane assist, …). They don't, and as such the entire report is useless or, worse, misleading.
All very true; by far the biggest factor is the first bullet. Autopilot is enabled exactly when the driver thinks the driving is very safe and easy. If we are comparing miles driven without an accident, Autopilot miles shouldn't be compared to all miles; the right baseline is something like all miles driven with cruise control activated, data which Tesla doesn't have and probably can't get, and which I don't think they'd want to trumpet either.
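To see how far that selection effect alone can move the headline number, here's a toy simulation. Every crash rate in it is invented; the only point is that a system engaged only on the easiest miles beats the all-conditions average even if it adds zero safety of its own:

    import random

    random.seed(0)

    # Invented per-mile crash rates: easy highway miles are inherently much
    # safer than the overall mix of city, night and bad-weather driving.
    CRASH_RATE_EASY = 1 / 5_000_000   # easy highway miles
    CRASH_RATE_ALL = 1 / 500_000      # all driving conditions mixed together

    def crashes(miles, rate_per_mile):
        # Count crashes over `miles` one-mile steps at the given per-mile rate.
        return sum(random.random() < rate_per_mile for _ in range(miles))

    MILES = 2_000_000
    # Pretend the assisted system adds *zero* safety benefit; it is simply
    # only ever engaged on the easy miles, while the baseline covers everything.
    assisted_crashes = crashes(MILES, CRASH_RATE_EASY)
    baseline_crashes = crashes(MILES, CRASH_RATE_ALL)

    print("Engaged only on easy miles:", assisted_crashes, "crashes")
    print("All-conditions baseline:   ", baseline_crashes, "crashes")

The naive comparison favors the "easy miles only" system purely because of where it is engaged; how big the gap is in reality depends on the actual condition mix, which only Tesla can see.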
It should be blindingly obvious from using FSD, or watching it be used, that it is not safer than a human right now. It's weird that people are trying to point to some very misleading statistics when you can go to YouTube and see serious problems in almost every video about FSD, even those by owners clearly wanting to paint it as great. Of course getting data-based answers is great, but when you see problems every couple of miles and the data is saying it's way safer than a human, you are doing something wrong with the data or using irrelevant data.
> when you see problems every couple of miles and the data is saying it's way safer than a human, you are doing something wrong with the data or using irrelevant data.
That's right, you are doing something wrong. It's a supervised system. There's a driver in the loop, and yes: the combination is safer.
You're engaging with another favorite trope of Tesla argumentation: pretending that a system that isn't actually deployed as fully autonomous actually is. Safety is about deployed systems as they exist in the real world. Your point is that in an alternate universe where Teslas all ran around without drivers, they might be less safe. And... OK! That's probably true. But that's not a response to "Teslas are safe", is it?
They're safe. Just admit that, and then engage in good-faith argument about what needs to change before they can deploy new modes like full autonomy. Lots of us would love to have that discussion and provide detailed experience of our own, but we can't because you people constantly want to ban the cars instead of discussing them.
It's called FSD, as in Full Self-Driving. If it causes even a single "accident" where it's unable to see a totaled car with hazard lights on right in its face [1], then it's not really full, is it now. Not to mention the hilarious contradiction in terms: Full, but Beta.
No one would have any major issues if this (self)killing system were called "Extended Driver Assist" and it implemented minimal safety features like driver monitoring and safely stopping the car if the driver doesn't pay attention to the road.
I don't believe one accident is too many. I made my statement based on videos I've seen of people having to disengage their beta FSD in circumstances where a human driver would have no trouble.
Now, maybe the data says otherwise. If that is the case, then great! Let's roll out some more FSD beta. But for that data to be valid, you have to account for the fact that Tesla filters bad drivers out of the pool of FSD users. And as I understand it, there is no public data about the risk profiles of the people Tesla lets use FSD.
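For a sense of how much that driver screening alone could move the numbers, here's a hypothetical sketch; the population split and the relative crash rates are invented, not measured:

    # Hypothetical split of the driving population; shares and relative crash
    # rates are invented purely for illustration.
    population = [
        # (share of drivers, relative crash rate)
        (0.80, 1.0),  # typical drivers
        (0.20, 3.0),  # high-risk drivers, screened out by a safety-score gate
    ]

    general_rate = sum(share * rate for share, rate in population)
    screened_rate = 1.0  # only the typical drivers remain in the beta pool

    print(f"General population relative crash rate: {general_rate:.2f}")  # 1.40
    print(f"Screened beta-pool relative crash rate: {screened_rate:.2f}")  # 1.00

Under those made-up numbers the screened pool already looks roughly 30% safer before the software contributes anything, which is exactly why the risk profiles of the admitted drivers matter.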