Tesla recalls 360k vehicles, says full self-driving beta may cause crashes (cnbc.com)
727 points by jeffpalmer on Feb 16, 2023 | 814 comments



There really needs to be a distinction between an actual recall (my Model S had one to change the latch on the frunk) and this type of "recall" that is nothing more than an OTA software update.


Why? It involves a safety system! That needs to be tracked publicly and updated! "Nothing more than an OTA" does not mean much when everything is fly-by-wire and a bug could mean your car does not stop accelerating or something.


Recall implies something being returned. This should be called a mandatory software patch, or something like that.

The problem is that Tesla owners will repeatedly see "recall" used to mean "software update", which might lead to a lot of confusion if a physical recall is actually required in the future.


I like that we're arguing over the word "recall" being an issue when the system it applies to is called "full self driving". I think calling a level 2 driver assistance system "full self driving" is a much bigger inaccuracy.


Rebrand the "full self driving" marketing to "driver assist optimist".


To play devil's advocate, it's only a misleading label when the word "Beta" is omitted.


Not really. If someone says "Full self driving beta" I'm thinking "The goal is full self driving, the current approach should deliver full self driving, beta means that it requires some tweaks and testing".


Nobody outside of tech understands it this way. "Full Self-Driving Beta" is a marketing stunt that borders on fraud. These cars can't drive themselves for shit, they're not even in beta.


You own/owned a Tesla with FSD Beta, right? You have driven it extensively to see and understand its capabilities and how it's improved, right? I've driven FSD Beta now for more than a year. Based on my experience, you have no clue what you're talking about.


You've misunderstood me. I agree with you.


Oh, ok, yes I did.


> I'm thinking "The goal is full self driving, the current approach should deliver full self driving, beta means that it requires some tweaks and testing".

And that "bugs" involve accidents.


Reminds me of an old joke about Ford vs Microsoft. We seem to have come full circle. Abridged version:

> At a computer expo, Bill Gates compared the computer industry with the auto industry and stated, 'If Ford had kept up with technology like the computer industry has, we would all be driving $25 cars that got 1,000 miles to the gallon.'

> In response to Bill's comments, Ford issued a press release stating:

> “Sure, but would you want to drive a car that for no reason whatsoever would crash twice a day?”

https://www.chevyhhr.net/forums/lounge-10/microsoft-vs-ford-...


"Beta" just means there are likely to be bugs, not that the intention isn't (level 4/5) "full self-driving".


It seems to me that calling it a "recall" emphasises the severity of the problem, which might make it easier to argue to the interested authorities that a lot is being done for customer safety. But I don't think Tesla wants this to be seen as anything more than an OTA update from the perspective of their customers (at least those who ignore Tesla news).

I had a VW car that was "recalled" shortly following the emissions scandal. The dealership asked me to come in for a free software update related to emissions. So you can say it's a "recall" to the lawmakers but call it a "free software update" to the user.


The industry needs to come up with a new related term like "Software Recall"

Recalling the hardware is a drastically more difficult thing to impose on customers, and drastically harder financially/logistically for the car maker.

That's a disproportionate response just to highlight the importance of an OTA update.


> Recalling the hardware is a drastically more difficult thing to impose on customers, and drastically harder financially/logistically for the car maker.

And the distinction matters to consumers because...?

A component is faulty. It needs to be fixed. Whether or not you have to drive to a dealership, if it's OTA, if someone at a dealership needs to plug a specialized device to your car's OBD port, or the car is unfixable and needs to be melted to slag and you get a new one doesn't really matter. There's an issue, it is a safety issue, and it needs to be fixed.

How efficient the process can be is another matter entirely. That's up to the manufacturers.


> Whether or not you have to drive to a dealership, if it's OTA, if someone at a dealership needs to plug a specialized device to your car's OBD port, or the car is unfixable and needs to be melted to slag and you get a new one doesn't really matter.

As a car owner, those scenarios are drastically different to me. I have a hard time imagining anyone saying "It doesn't really matter to me if my car receives an OTA update or if I need to drive 2 hours to a dealership or if my car is melted to slag."


If you had to send in your cellphone each time there was an Android update, vs. all iOS updates being OTA, I think you would see the distinction as a consumer.


> And the distinction matters to consumers because...?

Because in one I have to book a time with a dealer and take half a day off of work and in the other I have to do... nothing and it will just fix itself.


I would say that whether a "recall" requires some action on the part of the owner is a very important distinguishing factor.

A recall should unambiguously mean that some action from the owner is required to resolve the issue (e.g. taking it to a dealer to get a software update installed.)

If no action is required (other than caution / not using the product feature), we should use some other term such as "safety advisory" to avoid ambiguity around critical safety information.


> A recall should unambiguously mean that some action from the owner is required to resolve the issue (e.g. taking it to a dealer to get a software update installed.)

Should, perhaps. But recall has a meaning with legal implications, so it matters. A recall requires the fix status to be tracked and reported, for example, whereas a random OTA update does not.


Recall is just the wrong word for what this is.


'full self driving' is an even more incorrect term, then, if you want to be pedantic. if the car mfg takes zero liability/accountability, then it is zero self-driving.

you can, in fact, 'recall' software. this is semantically accurate description of what is happening.


It's a full self driving 'beta' though, it's literally in the name that it isn't final and doesn't have the bugs worked out. You also have to opt in to it.


Wait, is your argument that adjectives can only be applied to Tesla marketing terms, and not to other English words?

What do you think “FSD recall” or “software recall” means?

Tesla released software that can kill people. It must do an FSD recall.

This is not the same as Apple doing an OS update, not mandated by a regulatory body because it could kill someone

I was hit by a driver not paying attention. I had to get two surgeries, was not allowed to stand for 9 months, spent another year just rehabbing my nervous system and learning to walk. I still can’t jump or run

Tesla released software it’s marketing as “full self driving” that is not even Level 2 Alpha! Full Self Driving implies a level 5 system.

Tesla’s software has disclaimers like “may not behave as expected in sunlight”. Even its AutoPilot has these disclaimers.

It constantly claims FSD makes drivers safer. Yet it’s not transparent with its data, and the data & comparisons it does release are completely misleading to the point of fraud, comparing apples to oranges. The cars it’s released on public roads to untrained drivers run through intersections (it’s all over YouTube) and fail all kinds of basic driving tests. Tesla accepts ZERO responsibility if someone is killed while FSD is activated… that’s how little confidence it has in its own product.

This is a product that can maim and kill humans, ruin people’s entire lives… and your response is incoherent mumbling about adjectives ?

Any person who is majorly confused by what a “software recall” is… or can’t figure out what this FSD recall means for them, shouldn’t be beta testing a 2-ton machine on public roads. They shouldn’t be driving period.


I disagree; you are making one interpretation of what "Full Self Driving Beta" means to everyone, as if everyone using the beta, who has to opt in, purchase it, and have a good driving record (indicating they understand the rules of the road), is only looking at it like a headline they don't read. You imagine everyone is dumb and you are the only smart person who looks beyond the name of something. You are the one caught up in the marketing, while the people actually making the decision to spend their money on this and opt in are the ones actually looking into it. Your claim that they take no responsibility is incorrect, as they offer their own insurance if so chosen. If insurance companies didn't want to cover it, they could easily just not cover Teslas, but that isn't happening. Insurance companies still cover it because Tesla is safer than other vehicles. "Full Self Driving" doesn't mean perfect driving; it means as good as or better than the average human, and that bar is not that hard to pass.


I don't see how calling it a Software Recall will soften the blow for the car's user when the user has to drive it to a place where a device can be plugged in to do the update.

With a Recall in the normal sense, isn't there a record that the car has been updated? How is this done if the car is kept fully available to the user?


I had a similar recall with my washing machine. It was literally updated automatically before I even knew a recall existed.


Why the hell does your washing machine need internet access?


Don't really need it but it sends notifications when the wash is done, lol.


#internetofshit solving problems you never had, one insecure device at a time.


But other than Tesla, most carmakers still require a return to the dealer for a software update.


Mandatory Software Update?


If you want owners to understand that it's a serious safety issue, the word "recall" won't help. Most recalls are for minor, non-safety-related issues. My car has had a few recalls, and none were urgent, just things that got replaced for free the next time I brought my car in for service.

"Critical safety defect" would be better.


Then they weren't recalls, they were TSBs


There is a big difference between taking a car to a dealership for them to apply an update, and the car updating itself overnight as it sits in the garage with no action required by the owner.


I work in the medical device space, and we will often have "recalls", which usually result in a software patch. Recall != return.


Are these delivered directly to the device and installed overnight automatically (like an iPhone iOS update) or do they have to be hooked up to a computer to install the software update?


Recall is the word we use when a defect was found and must be repaired or retrofitted. The actual process of repair could involve customer action or not.

Teslas are basically digitally tethered to the dealer, so they can be "recalled" anytime (without your approval, fwiw), but it doesn't make the word not apply.


If Tesla replaced the actual computer that runs the software that they're recalling, would you consider that a recall? What if there was no actual physical fault with the computer, but it just had firmware flashed to a chip that couldn't be reflashed?

I'm looking at a piece of mail right now that's an official recall notice for a different make/model of car I own. The issue? Improperly adjusted steering components. The company is offering to make the correct adjustments for free. Nothing is being replaced.

Whether the recall is to replace a poorly designed physical component, or to make an adjustment, or to apply a software update doesn't make a difference to regulators.

A recall is a legal process that's designed to encourage manufacturers to fix safety issues while also limiting their liability. Companies avoid recalls if they can because it's costly, time consuming, and isn't good PR. But it's worth it if the issue is bad enough that it risks a class action lawsuit, or individual lawsuits, and most desirable when someone like the US government is demanding a recall or risk legal consequences.

When a company issues a recall, they make their best effort to notify consumers of the issue, provide clear descriptions of how consumers can have the issue fixed, and make it clear that it will be paid for by the manufacturer. In return, the manufacturer is granted legal protections that drop their risk of being sued to nearly zero.


> If Tesla replaced the actual computer that runs the software that they're recalling, would you consider that a recall?

Yeah, it requires physically taking the car to a mechanic or dealer who does this. Very different from using the software update button on the car touchscreen.

> Improperly adjusted steering components. The company is offering to make the correct adjustments for free. Nothing is being replaced.

This is clearly a recall, because it requires taking the vehicle to a mechanic or dealer.


> it requires taking the vehicle to a mechanic or dealer.

Nope.

If you insist on believing this, I'd recommend reading the document found here:

https://www.nhtsa.gov/sites/nhtsa.gov/files/documents/14218-...


How exactly are the steering adjustments done? I figure a handful of car owners might have the skills and tools to work on the steering column themselves, but that's not at all the average car owner.


Can the average car owner adhere a sticker to the underside of their hood? Or read a paragraph that corrects a mistake in the user manual? Or read a few sentences that say "it is possible that in extremely cold environments the emergency brake release lever can require more effort to operate. This does not indicate a faulty emergency brake. Applying more force than usual will not harm the emergency brake system", followed up with diagrams describing the issue, along with toll-free phone numbers offering assistance, as well as phone numbers for the NHTSA, the authority coordinating the recall? No part of the official recall notice instructs consumers to replace or even repair anything.

Again, the nature of the issue a recall addresses is wholly independent of how that issue is remedied. Why? Because a recall is a legal process that, by design, is meant to accomplish one thing: motivating a company to correct an issue that the governing authority considered important enough to correct.

If you choose to believe otherwise, I doubt it matters in the grand scheme of things.


Depends on the device. Some require connection to a host system, some can be done over the air. 100% depends on the security profile of the device in question and what the FDA allows.


I've primarily worked with Software as a Medical Device, so recalls generally involve a config tweak, upgrade, or downgrade.


This is most likely a legislative issue with NHTSA; I don't think they have a mechanism by which they can enforce a software update, since the concept didn't exist when recalls were first implemented.


Why not? At the end of the day it's a binary checkbox on the paper saying you got the fix. Whether that fix was a software update or a new piece of hardware should be irrelevant.


I agree with you, I'm just responding to another commenter who questioned the use of the word "recall" in relation to software updates.


This is the correct answer. The question is, how do we get better at regulating reality faster?


> This is the correct answer. The question is, how do we get better at regulating reality faster?

I don't think "Use words that put a more positive spin for Tesla PR" is something regulators should be working faster on.


Inaccuracy is inaccuracy no matter in which direction. If an outdated law made it easier for Tesla PR to spin something in a positive light, would you consider that an issue?


There's no inaccuracy here. You are arguing about an implementation detail.


Arguing about an implementation detail which creates inaccuracies, yes.


What exactly is the supposed inaccuracy?


Recalls about car software, and about updating car software, have existed since shortly after OBDII was a thing.


Electing congresspeople who actually stay on top of the expert consensus in various regulated fields is the only way the frameworks themselves can be improved.


My Honda had 2 recalls on it recently: one was a software update and the other was over the fact that a few of the cables on the audio system were slightly too short. This sounds like the same thing other than the fact that I had to pop over to the dealership to do them. Even with the cable replacement, in and out in an hour with a nice lounge to sit in.


If the software update has to be done physically at a dealer ___location, that's a recall.

The scenario in this article is an over-the-air update.


While we're talking about semantics, the auto industry seems to have a thing called "Technical service bulletin" which is a piece of actionable knowledge. If it applies to you, you act on it. You would probably have to go to the shop to get the TSB considered and applied. I don't think it has any regulatory weight, except as an input to deciding if a recall is a good idea.


I just looked it up. The relevant definition of recall is to request returning a product.

If it's possible to buy a software license online and then return it online after deciding you no longer want it (i.e., a non-physical return), then it stands to reason that Tesla can request that you return the defective software OTA and receive replacement software OTA, and that would be a recall. The fact that you are forced into returning the defective software by virtue of not having the opportunity to block the return request is a fairly minor detail.


I wasn't told about either of these recalls, though. I went in because the audio was clicking, and they told me they had 2 recalls out on my car, including for the audio issue, and that it was a quick fix.

Normally, it happens with an oil change.


It's hilarious watching Tesla owners and Elon complain about the wording when it's Tesla's fault this is a recall.

If Tesla had willingly walked this back, it'd be a software update, or a beta delay, or whatever they wanted to call it.

What laypeople don't realize is that this is being called a recall because the NHTSA pushed them into it: https://static.nhtsa.gov/odi/rcl/2023/RCLRPT-23V085-3451.PDF

-

FSD has been a cartoonish display for the better part of a year now, it wasn't until last month that the NHTSA actually pushed on them to do a recall, and from there they "disagreed with the NHTSA but submitted a recall"... which is code for "submitted the recall NHTSA forced on them to submit"

Elon knows better, but he knows he can weaponize people's lack of familiarity with this space and inspire outrage at the "big bad overreaching government"


Surely a hardware recall would specifically tell you to go to a dealer, right? I’m not sure it’s really confusing. The specific thing here is that these are mandatory things that are tracked by vehicle.


> Recall implies something being returned.

That's where the word came from, that's true. But a legal framework has built up around the concept over the decades which isn't dependent on you having to drive the car to the dealer, all of which still applies, so the word is still used.


Over-the-air software updates haven't really been a thing until Tesla (previous cars that required software updates still required visiting a dealer or service center) so what we have here is an outdated legal framework.


It's not outdated. It's simply that the word recall in the context of cars has a specific legal meaning which isn't entirely the same as the conversational english usage of the same word.

A defect that is subject to a recall, for example, is tracked as part of the car history. When considering buying a used car, you can see whether that repair has been made yet or not. The means of how that repair is delivered is inconsequential.


Recall should just mean affected vehicles must have the fix. What component of the vehicle is affected or how it is supposed to be fixed should be irrelevant, something is defective and the vehicles should probably not be used until that's sorted.


The software is being returned.

Your argument seems to be about protecting Tesla's reputation


In auto recalls, "recall" more generally implies something being repaired than returned, IME


It implies neither.

A recall is a legal process only. Whether the recall repairs, replaces, or adjusts something doesn't matter. Whether a fix is applied as software, or labor, or replacement parts doesn't matter. Whether a customer needs to do something or not doesn't matter.

A recall simply says: by working with government authorities and taking specific prescribed steps to communicate and correct an issue at its own cost, the manufacturer is then immune from lawsuits that could arise were it to ignore the issue.


Yeah, but the reason why they used the word "recall" is that "recall" was already a word that means to officially demand that something (or someone) be returned to a previous ___location. Of course, before over the air software updates, essentially anything on an automobile that needed to be replaced/repaired/modified would need to be returned to a dealer/mechanic to do so. So now it sounds a little weird to some people to refer to an over the air software update as a recall.


Samsung just recently had a recall on some recent model top loading washing machines due to fire hazard. The fix? A software update.

https://www.cpsc.gov/Recalls/2023/Samsung-Recalls-Top-Load-W...


It seems that it's only not a recall when Tesla does it, because Tesla invented software updates.


The software isn't returned but it is destroyed and replaced. In a world where the behavior of the things we own is driven by software, it's pretty much just the same as if you recalled and replaced faulty gas tanks.


Software recalls are nothing new. This particular use of the word “recall” refers to the legal process used by regulators to mandate a fix to a product.


Well, in computing the term “Security Update” is in use; why not call this a “Safety Update”?


I agree. It’s like crying wolf, eventually you start ignoring it


> It’s like crying wolf, eventually you start ignoring it

The problem here lies in having a manufacturer ship an unfinished product and then rely on an endless stream of recalls to finish developing the vehicle.

These are supposed to be exceptional events. If they've become so frequent you're ignoring them, don't shoot the messenger.


Laypeople have an incorrect perception of what a recall actually means, especially when it comes to vehicles. The most important effect that comes along with an official vehicle recall is that the manufacturer has to fix the issue for you for free, or otherwise compensate you in some way for reduced functionality you may have paid for.


Well, that and the manufacturer has to notify owners of the recall, which is (or should be) tracked by the vehicle's VIN.

https://www.nhtsa.gov/recalls


Recalls happen in other product spaces all the time, and they often have "fixes" that say "stop using our product and throw it away". That's still a recall. The word "recall" in relation to this regulation is simply a term for "something is broken in a way that a merchant should not have sold"


We should probably go beyond the verbiage of "recall", but right now, since it is removing a feature, I think that "recall" is appropriate. A better term might be "safety reversion".


The use of the word "recall" isn't because someone just felt like using it. It's an official legal process, followed to limit the manufacturer's liability.

Whether it's a "good" word or "bad" word is irrelevant. It describes a very rigid and official legal process.


Exactly. I had a "recall" at one point where the manufacturer had a typo on a label in the engine compartment. The fix entailed receiving a new sticker in the mail and applying it over the old one. To this day, I can look up my vehicle on the NHTSA site and see that that sticker was delivered to me.

If I had chosen not to actually apply it, the dealer would have been expected to do so the next time my car was in for service.


I don't think this is removing a feature; the recall notice says:

> The remedy OTA software update will improve how FSD Beta negotiates certain driving maneuvers during the conditions described above, whereas a software release without the remedy does not contain the improvements.


"Recalls" almost never involve being returned. There's a recall out on my car's water pump (Thanks a lot VW) and no part of it involves sending my car back, or even interacting with VW or a dealer. It's just something I'm supposed to keep in mind over its life and various maintenance in the shop.

Other cars get "recalls" all the time that amount to updating the software in the ECU or TCU. Tesla is simply being treated like everyone else.

Hell, even in food, a "recall" usually means "throw it away" not return it.


> Why? It involves a safety system! That needs to be tracked publicly and updated! "Nothing more than an OTA" does not mean much when everything is fly-by-wire and a bug could mean your car does not stop accelerating or something.

True, but by announcing things in this fashion it is making Tesla look bad. Regulations really need to be updated so that car makers can hide this type of problem from customers as easily as possible. Especially when it comes to Tesla, regulators really need to bend over backwards to prevent articles from being written that could be interpreted in a negative way.

Or are people concerned about the word "recall" for a different reason?


> announcing things in this fashion it is making Tesla look bad.

I almost didn't catch the sarcasm of this comment, but there are other comments in this thread that are saying basically the same thing, but actually meaning it. It defies logic.

People seem to think that the government is being mean, and singling out Tesla, and being nasty using the word "recall." A recall is a legal process. The word means something very specific, and when a company issues a recall, they do so because they don't want to be sued.

It's almost like complaining about the word "divorce" or "audit" or "deposition" or other similar words that describe a legal process. The words used mean something specific. Tesla is conducting a legal process, and there's a very specific word for that process, and it means something. It's a recall.


> True, but by announcing things in this fashion it is making Tesla look bad.

They rolled out software with critical safety issues. They have to be called out.


There is no critical safety issue, the driver is always in control of the vehicle.

Are you saying it's a critical safety issue to depend on a human driver? The same as in every vehicle on the road?


Regulators don't care about the perception of a recall; they care about the safety of the consumers and, more importantly, the general public, who have not signed up for Tesla's beta program.


I don't think the other people that replied picked up on your sarcasm.


Musk stans poison discussion so thoroughly that it becomes impossible for people to differentiate between Paul Verhoeven levels of sarcasm and the earnestly held opinions of his fan club.


It's a great example of Poe's law at work. Tesla apologists are just that absurd when it comes to holding the company to a double standard about anything that could make them look bad.


> so that car makers can hide this type of problem from customers as easily as possible

What. No! At an absolute minimum, I want to be aware of any changes to the thing in my life most likely to kill me. Maybe we could use a better term like "Software Fuckup Remediation" or "Holy Shit How Did We Not Get The Brakes Right".


If there were a word for "we would do anything we could possibly do, legally or otherwise, and more if we knew how, to avoid having to spend a second or a penny trying to fix the thing we knew was broken when we sold it to you, but the government is looking at us funny and we might get sued if we don't, so we'll grudgingly do it," it'd probably be "recall."


> it is making Tesla look bad

Maybe Tesla should stop doing things that result in it receiving poor publicity? just a thought


Tesla should continue doing what's best to accomplish the company mission and make vehicles safer by improving automation.

Why should a company's actions be dictated by PR and media news cycles?


Musk was complaining, not me!


Because nothing is being "recalled".

It should absolutely be tracked and publicized. But it's fundamentally different than "this car is fundamentally broken and you have to take it back to the manufacturer"


> Because nothing is being "recalled".

"Recall" with cars is a legal term that means something specific.

> But it's fundamentally different than "this car is fundamentally broken and you have to take it back to the manufacturer"

That's not a criterion for a recall. A recall can be for a small thing completely unrelated to driving safety. Someone else mentioned having a recall to replace a sticker. My most recent recall was to replace the trunk lifts.

"Recall" for a car just means a specific procedure now has to be followed, tracked and reported. It says nothing about the safety criticality of the fix.


You do not call the security updates that your computer gets a recall of your computer, do you?


If I don't apply the update, would my computer potentially kill me or cause me great bodily harm? If yes then I think it should be called a recall.


To me "recall" clearly implies that I have to drive it to the dealership and let them fix or install something. "Mandatory software update" might be a better term.


The suggestion wasn't that it should not be tracked, just that it shouldn't be called a recall, since they're not actually recalling your car to have something fixed.


I’m leasing a Honda e, and software-wise literally nothing except Apple CarPlay works / is usable.

Some things are absolutely safety-relevant. But no one cares.


Should be a CVE.


One can kill people, the other cannot. Guess which one.

>> "...The FSD Beta system may cause crashes by allowing the affected vehicles to: “Act unsafe around intersections, such as traveling straight through an intersection while in a turn-only lane, entering a stop sign-controlled intersection without coming to a complete stop, or proceeding into an intersection during a steady yellow traffic signal without due caution,” according to the notice on the website of the National Highway Traffic Safety Administration..."


> One can kill people the other not.

No, that's not correct. Whether it can kill people or not is orthogonal to whether it's a true physical recall of the car or a software update.


I think that it’s more about using the most appropriate known terminology in order to try to get the most people to do the needful. “recall” sounds more urgent/dire than “software update”, and will likely encourage many more people to take action vs using “software update” or some less familiar terminology. The word “recall” in terms of autos has built up a lot of history/prior art in people’s minds as something to really pay attention to. I have no idea, but perhaps that is why they are going with this known terminology.


The whole point of over-the-air updates is that the owner doesn't need to do anything. For example, both Tesla and Toyota have had bugs in their ABS software that required recalls. The owners of the Toyotas had to physically bring their cars in to get the software update which slows down the adoption drastically. The Teslas received the update automatically and asked for the best time to install the update the next time the owner got in the car.

There are really two issues here. The FSD and the OTA updates. Let's not throw out the baby with the bathwater and blame OTA updates just because Tesla's FSD software is bad. The OTA updates do provide an avenue to make cars much safer by reducing the friction for these type of safety fixes.


> The OTA updates do provide an avenue to make cars much safer by reducing the friction for these type of safety fixes.

True, but let us also acknowledge the immense systems safety downsides of OTA updates given the lack of effective automotive regulation in the US (and to varying degrees globally).

OTA updates can also be utilized to hide safety-critical system defects that did exist on a fleet for a time.

Also, the availability of OTA update machinery might cause internal validation processes to be watered down (for cost and time-to-market reasons) because there is an understanding that defects can always be fixed relatively seamlessly after the vehicle has been delivered.

These are serious issues and are entirely flying under the radar.

And this is why US automotive regulators need to start robustly scrutinizing internal processes at automakers, instead of arbitrary endpoints.

The US automotive regulatory system largely revolves around an "Honor Code" with automakers - and that is clearly problematic when dealing with opaque, "software-defined" vehicles that leave no physical evidence of a prior defect that may have caused death or injury in some impacted vehicles before an OTA update was pushed to the fleet.

EDIT: Fixed some minor spelling/word selection errors.


This is a totally fair response since I didn't say that directly in my comment, but I 100% agree. OTA updates are a valuable safety tool. They also have a chance to be abused. We can rein them in through regulation without getting rid of them entirely because they do have the potential to save a lot of lives.


I agree.


It'd probably be just as effective to require that every version of the car software that is made available to the fleet must also be provided to the NHTSA. There's no sweeping shoddy versions under the carpet then.


> The word “recall” in terms of autos has built up a lot of history/prior art in people’s minds as something to really pay attention to

Tesla didn't choose the word "recall." The legal process known as "recall" chose the word. It's not like people at Tesla debated over whether or not to call it a "recall" instead of a "software update."

If Tesla had it their way, they'd have quietly slipped it into any other regular software update alongside updates to the stupid farting app, if they cared to fix it at all.

When a company issues a recall, it's because there's pressure from regulators, or investors, or both, and/or a risk of class action lawsuits and fines. Using the word "recall" isn't a preference or even a synonym. It's a legal move meant to protect them.

If Tesla gets sued over a flaw, "we issued a software update" isn't legally defensible. "We cooperated with official government bodies to conduct a recall," does because a recall describes an official process that requires manufacturers do very specific things in specific ways as prescribed by law. In exchange, manufacturers are legally protected (usually) from lawsuits related to that flaw.


It is the terminology that exists in US automotive regulations (what little there effectively are).

A "recall" is just a public record that a safety-related defect existed, the products impacted and what the manufacturer performed in terms of a corrective action.

Additionally, I believe that the possibility exists that Tesla must update the vehicle software at a service center due to configuration issues. Only a small number of vehicles may require that type of corrective action, but the possibility exists.

Historically, there exist product recalls (especially outside of the automotive ___domain) where the product in question does not have to be returned (replacement parts are shipped to the impacted customers, for example).


No, really, this is true. It has nothing to do with how the defect is fixed.

https://www.nhtsa.gov/sites/nhtsa.gov/files/documents/14218-...

https://www.law.cornell.edu/cfr/text/49/573.6

(Except for tires.)


Hmm. Perhaps I should have read the parent's comment more carefully. I think that I might have misinterpreted it.

You (and the parent comment) are correct.

My comment was not intended to argue that a recall prescribed a particular corrective action.


> "true physical recall"

ah, a made up term in order to justify your point. how convenient.


No, we're talking about making terms actually fit their definitions, which is generally helpful.


If it actually fit the definition then why would you need to add "true" and "physical"?


> whether it's a true physical recall

I hope by participating in this thread you're aware by now but just to be clear there is no "physical" recall necessary. The recall is about documentation, customer awareness, and fixing the problem. "Physical recall" is meaningless and unimportant, it's not what "recall" means at all.


And I hope you see that you've demonstrated why "recall" is a poorly chosen word for that, since the word's normal definitions have nothing to do with "documentation," "customer awareness" or "fixing the problem."


I think it’s closer to the physical product recall: it’s a strong “everyone with our product needs to get it fixed” message which they’re doing to avoid liability and further damage to their reputation.


You're right. A physical recall doesn't necessarily imply death.

This should be labeled "holy shit need to fix this now, people could die"


The majority of "actual recalls" consist of you taking your car to the dealership and them plugging into the diagnostic port and running one line of code. So this one is the same, just that Tesla is able to do it over the air.


>just that Tesla is able to do it over the air.

...and that's one reason why I would never purchase a Tesla.


You’d prefer to make an appointment at a dealership and have to physically drive it in?


"Full self driving beta" is one of my reasons.


Don't forget how many recalls are "Next time you replace this part, it will be replaced with a new version that doesn't have the defect" or how many recalls are "A tech will look at the part and then do nothing because your part is fine" or "A tech will weld the part that is currently on your truck" or "Be aware this part may fail ahead of schedule and if it does it will suck, but you don't technically have to replace it right now so we don't have to cover the cost"


My all time favourite[0]

I recall getting a recall notice from GM that included "until repaired, remove key from keychain".

[0] https://en.wikipedia.org/wiki/General_Motors_ignition_switch...


"Recall" means that the Manufacturer MUST follow rules related to record keeping and customer engagement. If you have a recall, please make sure you get it completed. It is someone's job to call and write to you until you do.


Finally the correct answer in a wave of "well ackusally" comments.

A recall means exactly what it means. The manufacturer is responsible for a fix. If for whatever reason they can't push an OTA update to your car, Tesla is still responsible for sending you a postcard in the mail and calling you every 6 months telling you to bring it in for service until they have reasonable evidence the car is no longer on the road.


It's nice that they can quickly fix it without people needing to drive to a service center, but you can understand that people would be concerned by the "may cause crashes" part?


It's a bit of a semantic play here. But there's a difference between it might happen and things actually happening. Tesla has had several safety related "recalls" in the last few years. All of which were fixed without much hassle via an over the air software update. And of course their record on safety kind of speaks for itself. Not a whole lot of bad stuff happening with Teslas relative to other vehicles that are facing issues related to structural integrity of the car. Like wheels might fall off with some Toyotas. Or spontaneous combustion of batteries because of dodgy suppliers (happened to BMW and a few others). Which is of course much harder to fix with a software update and would require an actual recall to get expensive repairs done.

The headline of that many cars being "recalled" is of course nice clickbait. Much better than "cars to receive minor software update that fixes an issue that isn't actually that much of an issue in the real world so far". One is outrage-ism fueled advertising and the other is a bit of a non event. It's like your laptop receiving some security related update. Happens a lot. Is that a recall of your laptop or just an annoying unscheduled coffee break?


Was the recall a voluntary recall by the company or something the company was told to do by a regulator? To me, recall means much more than just having the company replace something. It means they have to do it at their expense. So in this case, it's not as bad for Tesla's bottom line if it is just an OTA update. A recall is something that the car industry is used to doing whenever they have to fix a mistake. I would not be surprised if the industry has ways of writing those expenses off in taxes or something, so they need to be able to specifically itemize the recall work.


> Was the recall a voluntary recall by the company or something the company was told to do by a regulator?

"Voluntary recall" in this case means that Tesla did not choose to take the hard route where there's a court order for a mandatory recall. Few manufacturers fight that, because customers then get letters from the Government telling them their product is defective and that it should be returned for repair or replacement.

Somebody in the swallowable magnet toy business fought this all the way years ago.[1] They lost. It's still a problem.

[1] https://www.cpsc.gov/Safety-Education/Safety-Education-Cente...


Nearly all recalls are voluntary. But the NHTSA advises the company that if they don't do a voluntary recall, the NHTSA will do a compulsory recall.

A voluntary recall is easier and cheaper for all involved.


Almost all car recalls are voluntary because once it becomes a mandatory recall the government can require them to provide a buyback option to the vehicle owner.


oh, that would be amazeballs to have Tesla reimburse the cost of the FSD purchase. i never drank the Tesla kool-aid, so it would be good karma to see them get dinged for the snake-oil they've sold as FSD


Oh no, it wouldn't be a FSD buyback, it would be a car buyback at damn near purchase price... which is why manufactures avoid it at all costs.

§ 30120(a)(1)(A)(iii) by refunding the purchase price, less a reasonable allowance for depreciation.


I owned three Toyotas over the last 7 years. I had multiple recalls, but except for one of them, every one was a software update.

OTA updates are cool, but they add more complexity to the car. I like my cars to be simple and reliable instead.


I agree with you and I think the responses you are getting aren't helpful. Sure this is technically a recall but:

Customers aren't dealing with the inconvenience of having to bring their vehicle in / not having it for a while.

And Tesla is not incurring the cost of physically handling and fixing 360k cars.

So while it's technically a recall, the impact on the consumer and manufacturer is very different than what the word brings to mind.


Why? It still has to be tracked and follows the same NHTSA regulations.

https://www.nhtsa.gov/sites/nhtsa.gov/files/documents/14218-...


It's funny that it's a bit like "autopilot". The word fits exactly here, but the lay public doesn't usually understand it to mean this.


My Honda has had recalls where I had to bring it in for a software update. The only difference here is that Tesla has infrastructure to do that remotely.


I think this is splitting hairs. Would it still be a recall if a mechanic drives to your house to fix a mechanical problem?


Eh. People should be informed of dangerous behaviors of public products regardless of how they're fixed.


Oh damn, that title is definitely a lie then


This. I read the headline and thought it meant all those cars had to go to the dealer


I used to be on the other side of this, but now I agree with you.

The official meaning of a recall is providing a record of a defect, informing the public that the product is defective, and making the manufacturer financially liable for either remediating the defect or providing a refund. However, the colloquial definition of a “recall” now means a product must be physically returned.

To better represent the nature of a “recall” they should instead call it something like “notice of defect”. In the case of safety critical problems like here they should use a term like “notice of life-endangering defect” to properly inform the consumers that the defect is actively harmful instead of merely being a failure to perform as advertised.

tl;dr They should change the terminology from “recall” to “Notice of Life-Endangering Defect”


I wish all the ink spilled on controversy over Tesla would instead be spilled on making basic safety improvements to the road system that are cheap and proven to save many lives.

Let's be real here: automated driving or not, having actually safe roads helps prevent death and harm in many cases.

The hyper focus on high tech software by all the agencies engaged in 'automotive security' is totally wrongly focused. What they should actually do is point out how insanely unsafe and broken the infrastructure is, especially for people outside of cars.

See: "Confessions of a Recovering Engineer: Transportation for a Strong Town"


If we're actually serious about this, we should get rid of roads altogether and replace them with rail tracks. That solves 95% of the automation problem anyway.


Yes, I am absolutely for that in a lot of cases. But let's be real, you are not going to end cars anytime soon. Lots of cars exist, and more will exist.

Even with the largest possible investment in rail, cars will exist in large numbers.

So yeah, rail and cargo tramways in cities are great. But we can't just leave car infrastructure unchanged.

Especially because existing car infrastructure is already there and cheap to modify. Changing a 6 lane road into a 3 lane road with large space for bikes and people is pretty easy.


Replacing roads with railways means nearly complete reliance on a central authority for all transportation needs. This authority decides who gets to leave town and who doesn’t, which companies have priority when delivering goods, how or if the parallel railways should scale to meet more transportation demand, etc.

Transportation safety is important but it shouldn’t be considered in isolation when there are potentially catastrophic consequences to prioritizing safety at the cost of everything else.


With roads, you already have a central authority (the DMV) that decides who can drive and who can't. Try taking your home-built motorbike with no licence plate out for a spin on public roads; you'll likely get pulled over by police.


Why can't personal vehicles use rail tracks without permission from a central authority?


Trains and trams should be, overall, the past, present and future of affordable, green, cheap transport.


Don’t forget bikes!


You're living in a fantasy. Name one country that has done that.


Name one person who invented airplanes before airplanes were invented.


Spain, specifically Barcelona. An absolute joy to ride, I may add.


I have been to Barcelona and don't once recall seeing someone who owned a train. Having public transit is vastly different than a shared rail system.


Tokyo is intensely car hostile and it’s amazing


>If we're actually serious about this,

They are not serious. The rails are a great idea, along with drive-able 'rail' cars for individuals.


Not even rails; it could be magnetic, making cars like maglev trains. Wear and tear would be negligible too, since there's no friction.


That sounds economically realistic.


Lol


US passenger rail is unsafe. We are nowhere near the level of sophistication of European passenger rail transport.

We have a significantly higher number of derailments. Even the worst European rail is safer than US rail.


In the US trains have 17 times fewer deaths per passenger-mile than cars, and even then less than 1% of deaths from trains are passengers (the overwhelming majority are trespassers).

That it could be even better does not mean it is not a substantial improvement.


>US passenger rail is unsafe.

US passenger automobiles are more unsafe.


Just let Japan build US rail and copy-paste Dutch city planning. I guarantee it’ll be much better and cheaper than anything anyone in the US government would come up with, and it requires zero brain power.


Europe has more railroad deaths than the US. Try again.


Do you think there are any statistical issues with simply reporting the number of deaths?


Adjust that for person-kilometers traveled and try again.
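For anyone unfamiliar with the normalization being suggested here, a minimal sketch of the arithmetic (the numbers are made-up placeholders, not real statistics; Python):

    # Made-up numbers for illustration only -- not real statistics.
    # Raw death counts ignore how much each system is used; normalizing
    # by person-kilometers traveled gives a comparable safety rate.
    def deaths_per_billion_pkm(deaths: float, person_km: float) -> float:
        return deaths / (person_km / 1e9)

    # A network with more total deaths can still be far safer per trip:
    print(deaths_per_billion_pkm(900, 450e9))  # 2.0 deaths per billion pkm
    print(deaths_per_billion_pkm(700, 100e9))  # 7.0 deaths per billion pkm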


I disagree. If someone is a bad driver, and causing crashes, we don't say "we need to improve the road system". We suspend the driver's license until they can prove they are capable of being a safe driver. We should hold this software to the same standard. Until it can demonstrate safety at or above human level, it should be outlawed.

Road systems should always be worked on, but when a crash happens it's usually the driver's fault, except in a minority of cases where bad road engineering is to blame. This self driving is fucking up enough that it cannot be blamed on the roads anymore, if it ever could.


> I disagree. If someone is a bad driver, and causing crashes, we don't say "we need to improve the road system".

Well yes, and that is literally exactly the problem. That is exactly why it's so unsafe in the US: because instead of building a safe system, everything is blamed on people.

In countries that take road safety seriously, every crash is analyzed and often the road system is changed to make sure it does not happen again. That is why places like Finland, Netherlands and so on have been consistently improving in terms of death and harm caused by the road system.

Again, the book I linked goes into a lot of detail about road safety engineering.

> We suspend the driver's license until they can prove they are capable of being a safe driver.

An unsafely designed street often leads to situations where even good drivers intuitively do the wrong thing. Again, this is exactly the problem.

If you build a system where lots of average drivers have accidents, then you have a lot of accidents.

> We should hold this software to the same standard. Until it can demonstrate safety at or above human level, it should be outlawed.

Yes, but it's a question of how much limited resources should be invested in analyzing and validating each piece of software by each manufacturer. In general, software like Tesla AP would likely pass this test.

I am not against such tests but the reality is that resources are limited.

> Road systems should always be worked on, but when a crash happens it's usually the driver's fault, except in a minority of cases where bad road engineering is to blame.

I strongly disagree with this statement. It's a totally false analysis. If a system is designed in a way known to be non-intuitive and leading to a very high rate of accidents, then it's a bad system. Just calling everybody who makes a mistake a bad driver is a terrible, terrible approach to safety.

Once you have a safe road system, if somebody is an extremely bad driver, yes, taking that person off the road is good. However, in a country where so much of the population depends on a car, that punishment can literally destroy a whole family. So just applying it to anybody who makes a mistake isn't viable, especially in a system that makes it incredibly easy to make mistakes.

The numbers don't even show the problem: the unsafe road system leads to fewer people walking in the US, and somehow still creates a high rate of deaths for people who walk.


America hates any solution to a problem that doesn't involve training individuals to act differently. I constantly hear people who want warning labels gone and periodic PSAs against mixing bleach and ammonia removed, people who don't want mandatory baby left in a hot car detectors, people who hate OSHA, etc.

We think the solution should always be human. That the way to solve climate change is homesteads, the solution to poverty is individuals working harder, etc.

When anything we don't want happens, we think "I'll try harder next time" not "We will eliminate this as a possibility".

People treat life as a sport rather than an engineering challenge, they want to "win fairly", not make it impossible to lose, they want everything they do to say something about their own ability rather than say something about a clever process that makes individual skill irrelevant.

I really don't exactly like that aspect of humanity, constantly making everything into a sport.


Positive sum games >>> zero sum games. The solipsistic individualistic mindset stems from the zero-sum game system IMO and that's a larger social and biological issue.


It's bigger than zero sum. The people I'm talking about have amazing amounts of altruism and community ethics.

They truly seem to want the whole world to succeed together, just... excluding the people who want to put you in jail for sneaking lead fuel in your truck, and anyone who doesn't lift weights and eat lots of meat, or anyone who complains about them going out while having a cough spreading germs.


I'd recommend reading the book "There Are No Accidents", if you haven't already. Really goes into detail on this.


There's a book called "There Are No Accidents" that goes into depth discussing this fallacy that you should read.

https://nyc.streetsblog.org/2022/02/15/excerpt-there-are-no-...


Apparently you've never heard the phrases "texting while driving" or "drunk driving".


It's just how incredibly wrong you are.

Go look at Salt Lake City. It's true, Salt Lake City has horribly bad road design, like Houston. But because people drink less, they have slightly fewer accidents. However, internationally they are still terrible. And that is with there being very little walking.

If they had pedestrian and cyclist numbers like Amsterdam it would be 24/7 mass murder.


Tesla publishes their safety data quarterly; no need for your many assumptions and speculations. Teslas are already much safer than the average driver, especially when Autopilot is on.

https://www.tesla.com/VehicleSafetyReport


One interesting bit from a different 3rd party study:

> Cambridge Mobile Telematics also found that people driving Teslas were 21% less likely to engage in distracted driving with their phone in their Tesla compared to when they drove their other car.

Maybe the software integration helps avoid this? Lots of other cars have much more complicated interfaces for hooking up calls and reading texts. My mom struggled to figure out that her car even supported Android Auto.

It might just be that it's a higher-end car, but they didn't see the same effect for an EV Porsche:

> These findings include an analysis of Tesla drivers who also operate another vehicle. These drivers are nearly 50% less likely to crash while driving their Tesla than any other vehicle they operate. We conducted the same analysis on individuals who operate a Porsche and another vehicle. In this case, we observed the opposite effect. Porsche drivers are 55% more likely to crash while driving their Porsche compared to their other vehicle.

The reduction in speed is likely influenced by automated driving, especially considering how fast a Tesla car can accelerate vs normal cars:

> They were 9% less likely to drive above the speed limit.

https://electrek.co/2022/05/27/tesla-owners-less-likely-cras...


> Maybe the software integration helps avoid this?

With Autopilot on, the car is watching you. If you take your eyes off the road, it issues more "pay attention" nags. Failure to comply removes FSD Beta. So you have a feedback loop where paying attention becomes more important than your phone.


Autopilot is not FSD; this report is clearly about Autopilot. They haven't mentioned anything about FSD in it.


FSD is an extension of the Autopilot capability, and the Autopilot capability has proven extremely safe. As soon as there is statistically enough data they will release FSD numbers too, but that means tens of millions of miles logged on roads, since crashes are generally uncommon.


Is autopilot available for all roads? If not, that’s a significant and unstated statistical bias in this data. To have a true comparison, you will need to include only human accidents on roads where Teslas can use autopilot.
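
To see the bias concretely, here is a toy comparison with entirely invented numbers - the only point being that conditioning on road type can change the headline factor:

    # All figures below are made up purely to illustrate the bias.
    human_miles = {"highway": 1_000_000, "city": 1_000_000}
    human_accidents = {"highway": 1, "city": 5}
    ap_miles, ap_accidents = 1_000_000, 0.8

    naive_human_rate = sum(human_accidents.values()) / sum(human_miles.values())
    fair_human_rate = human_accidents["highway"] / human_miles["highway"]
    ap_rate = ap_accidents / ap_miles

    print(naive_human_rate / ap_rate)  # 3.75x "safer" vs. all human miles
    print(fair_human_rate / ap_rate)   # only 1.25x safer, like-for-like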


At an individual level that makes sense. But at a city level that doesn’t work. People will still need to get around.


Something is changing, and it's very broken, so people are paying attention to it.

If people really wanted to fix transportation, that's great; high-speed rail and public transportation reducing the number of cars on the road seem to be the best solution.

But hey, Elon's Hyperloop was a publicity stunt to discourage investment in that. So I say, whether you want to shit on Tesla or public roads, shit on Elon.


> reducing the number of cars on the road seem to be the best solution.

No, it's actually not. Less congestion in a system that depends on congestion for safety will lead to more accidents, not fewer.

That is what we saw during Covid: less driving, but accidents per mile went up.

So yes, of course public transport and bikes are great, but if you don't fix the underlying problems in the road system, you are going to have a whole lot of accidents.

> But hey, Elon's Hyperloop was a publicity stunt to discourage investment in that.

This is a claim some guy made, not the truth. What is more likely is that Musk actually thinks Hyperloop is great (it's his idea, after all) and would have wanted investment in it.

> shit on Elon

I prefer not to shit on people most of the time.

Musk is the outcome of a South African/American way of thought that is more in line with the US average than most people who advocate for public transport are. That is the sad reality.

And the problems with the US road system, or with the US's bad public transport, cannot be blamed on him at all. There are many people with far more responsibility who deserve to be shit on far more.


Accidents per mile driven isn't a good metric. If you halve the number of miles driven (through public transit/walking/biking) and have 50% more accidents per mile driven, then you've still saved a bunch of people's lives: 0.5 × 1.5 = 0.75 of the original accident total.


Yes, and miles driven are not fungible either. Accidents are more likely on local streets.


What is the underlying problem? I keep looking for it in your comments


Not GP, but the underlying problem to me is repeated zero-sum interactions in a crony-capitalism-based system. All the shit stems from the winners that emerge from these repeated games making short-term decisions that either allow them to exit or to play again.


I spent 6 months driving a 2019 Hyundai with lane assist and radar cruise control. For me, it's almost perfect. If you added some smart-road features to improve lane assist and sign readability (dropping the speed as I enter a town or a curve, and raising it when I leave one), I wouldn't need full self-driving - just an improvement on speed control and some lane assist.

Would smart roads be expensive? RFID transponders seem super cheap compared to how much actual asphalt costs. Authorities are currently unable to remotely control flows, speeds and safety, which is completely bonkers.


> Would smart roads be expensive? RFID transponders seem super cheap compared to how much actual asphalt costs. Authorities are currently unable to remotely control flows, speeds and safety, which is completely bonkers.

Yeah, it's not actually that easy. Go and look into train signaling. And cars are not even able to do coupling.

Making the road network operate like a super-railway, with cars, is crazy difficult and has never been done before.


But is it more crazy difficult than making a car that knows how to safely drive on roads designed specifically for humans?

That said, I think even just basic machine-readable road metadata like signs, speed limits, lanes etc would improve matters.


> That said, I think even just basic machine-readable road metadata like signs, speed limits, lanes etc would improve matters.

That assumes that sign reading is the primary issue with AI cars, and it isn't.

If you are willing to go to that expense and infrastructure investment, why not just build things like trains (trams, subways, S-Bahn) and things like trolleybuses and so on?

They are far more space-efficient, and you get much higher throughput than with your untested fancy road infrastructure.

Another issue with your solution is that there are 1 billion cars out there that have none of these things, so after decades of working on it, you will still have decades where most cars don't use any of this stuff.


I'm not saying that it is a primary issue, but all of these issues come up with Teslas occasionally - just read some of the comments here.

More importantly, even if this kind of stuff doesn't enable full self-driving, it can still be used to make basic driving assistance much better - think of even the most basic cars having lane assist on most roads, for example.

I don't think it's a significant expense, either. I'm not talking about ripping up roads and replacing all signs in one fell swoop. But e.g. when they redid the highway near me recently, it got little reflectors to mark the lanes. What if, say, every Nth of those had some kind of RFID thing in it? Or when they replace signs, why not put up a new one that broadcasts what it is? None of this is particularly expensive.

As to why not trains and buses - because they do not actually offer the same features as cars, and many people want those features. If you want to argue against cars on principle, it's a different conversation entirely. I do think that more quality public transport is desperately needed, but, speaking as someone who used it nearly exclusively for the first 25 years of my life (and in places that are designed around it, unlike US), it does not replace a car.


You seem to contradict yourself - train signalling is hard, but let's build trains...?

My point is that the road metadata isn't for collision avoidance, but for better lane and speed assist.
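
As a toy illustration of what such metadata could feed, here is a minimal sketch of a speed-assist clamp (the beacon format and every name are invented; no real standard is implied):

    from dataclasses import dataclass

    @dataclass
    class RoadBeacon:
        # Hypothetical broadcast from a roadside transponder.
        kind: str             # e.g. "town_limit", "curve_advisory"
        speed_limit_kph: int

    def assisted_setpoint(driver_setpoint_kph: int, beacons: list) -> int:
        # Speed assist never exceeds the strictest limit currently broadcast;
        # with no beacons in range it keeps the driver's setpoint.
        limits = [b.speed_limit_kph for b in beacons]
        return min([driver_setpoint_kph] + limits)

    # Entering a town broadcasting 50 km/h drops a 90 km/h setpoint:
    print(assisted_setpoint(90, [RoadBeacon("town_limit", 50)]))  # -> 50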


We're not talking about road safety. Road safety relies on a baseline amount of driving safety (car hardware + car software). If a car doesn't stop at a stop sign, no amount of "safe roads" prevents it from mowing down a cyclist.

Saying this is like saying we should ignore sex offenders in favor of reworking our society. Instead, we should solve both problems rather than bicker over priorities.


OK, we should be solving both problems, I agree, but the reality is that agencies and political processes have limited capacity. And the more a society and its influential people are consumed with one thing, the less they do about the other.

And I would say that I'm not proposing to rework society in a sociological sense, but rather to throw out the standard engineering standards and replace them with better ones.

And it actually does matter, even in the case you suggest. If a car rolls into an intersection, the speed that car is going matters. It matters at what point it becomes clear that the car is out of control. It matters whether there are speed bumps or something along those lines that can send strong signals to a driver.

If all intersections are raised, then the top speed of cars will simply be lower, and if somebody - human or AI - makes a mistake that leads to a crash, that crash will happen at a far lower speed.

And proper road design also means less intelligence and fancy car design is required. I'd rather get hit by a shittily designed, unsafe old car at 20 mph than by a fancy new car at 30 mph.


> The FSD Beta system may cause crashes by allowing the affected vehicles to: “act unsafe around intersections, such as traveling straight through an intersection while in a turn-only lane, entering a stop sign-controlled intersection without coming to a complete stop, or proceeding into an intersection during a steady yellow traffic signal without due caution,” according to the notice on the website of the National Highway Traffic Safety Administration.

Does anyone have insights on what QA looks like at Tesla for FSD work? Because all of these seem table-stakes before even thinking about releasing the BETA FSD.


> Does anyone have insights on what QA looks like at Tesla for FSD work? Because all of these seem table-stakes before even thinking about releasing the BETA FSD.

Tesla is not exactly in love with QA, especially for FSD.

FSD is mainly two things: 1. (by far the most important) a shareholder-value-creating promise - one that, according to their CEO, has been "solved" for six years now; and 2. a software engineering research project.

What FSD is not is a safety-critical system (which it should be). They focus on cool ML stuff and shipping features, with total disregard for how to design, build, and test safety-critical systems. Validation and QA are basically non-existent.


Do you have actual knowledge of Tesla's internal QA processes, any kind of source at all?

Based on their presentations, they certainly have a whole load of tests, many built directly from real-world situations that the car has to handle. They simulate sensor input and check that the car does the right thing.

They very likely have some internal test drivers, and before the software goes public it goes to the cars of the engineers.

Those are just some of the things we know about.

I have no source on their approach to testing safety-critical systems, but we do know that they have a lot of software that has passed all the tests by all the major governments. They are one of the few (or the only) carmakers fully compliant with a number of standards on automated braking in the US. We have many real-world examples of videos where other cars would have killed somebody and the Tesla stopped, based on image recognition.

So they clearly have some idea of how to do this stuff.

So when making these claims, I would like to know what they are based on. It might very well be true that their processes are insufficient, but I would like actual data. Part of what a government could do is force carmakers to open up their QA processes.

Or the government could (should) have its own open test suite that a car needs to be able to handle, but clearly we are not there yet.


Two sources.

1. I know people working at Tesla.

2. Much more importantly - Elon's Twitter feed. They're doing last-minute changes, and once a build compiles and passes some automated tests, it's tested internally for only a few days before it's released to customers. Even if they had world-class internal testing (they don't), for something that has to work in an environment as diverse as a self-driving system without any geo-fencing, those timelines are all you need to know.


Some manufacturers hold off on newer, untested tech for years before adding it to their vehicles. This is what happens when safety is a priority.

That's why I bought/will keep buying Toyota/Lexus.


Euro NCAP etc. seem to classify Teslas as (some of) the safest vehicles on roads.

https://www.euroncap.com/en/results/tesla/model+y/46618

Same for NHTSA:

https://www.nhtsa.gov/vehicle/2022/TESLA/MODEL%2525203/4%252...


Because they have a lower center of gravity and good crash structure. These are good things. But avoiding the accident in the first place significantly reduces the need to test that crashworthiness.


Tesla was also the top performer in avoiding accidents - see the Euro NCAP tests.


That's really not the point.

Because of FSD's false promises, Tesla encourages dangerous behavior from drivers.

I don't want to be next to a Tesla driving in autonomous mode while its driver at the wheel is not paying attention to me.


I strongly feel people ought to have these discussions while consistently citing actual data sources relevant to the discussion.

For example, did you predict, based on the speculation of Tesla being incompetent with regard to safety, that they have the lowest probability of injury scores of any car manufacturer? Because they do.

Did you predict, based on speculation about Elon Musk's incompetence in predicting that self-driving would happen, that there are millions of self-driving miles each quarter? Because there are.

Did you predict, based on speculation about Tesla's incompetence in full self-driving, that the probability of an accident per mile is lower rather than higher in cars that have self-driving capabilities? Because it is.

I know this sort of view is very controversial on Hacker News, but I still think it is worth stating, because I think people are actually advocating for policies which kill people because they don't actually know the data disagrees with their assumptions.

https://www.tesla.com/VehicleSafetyReport


Unaudited data (it's internal Tesla data), cherry-picked (comparing their very young fleet of expensive cars against the average car in the USA, a 12-year-old beater), with no correction for any bias (highway vs. non-highway driving being one of the many issues) - that is not exactly the magic bullet you think it is.

Also, none of that is self-driving. This data talks about AP, not FSD. FSD is also not self-driving by any means (it's a level 2 driver assist), but that's a detail at this point.


I didn't say it was a magic bullet. So you are hallucinating thoughts about me, not responding to what I said. Being critical of the data like you are being is good thinking in my opinion. I just don't like how often people don't have beliefs that are anywhere close to the data.

For example, elsewhere in this comment thread, someone threw out a random statistic of 400:1 as part of their argument, but this seems to me to be something like six orders of magnitude diverged from a data informed estimate.

To try and contextualize how big an error that is - it is like thinking that a house in the Bay Area has the same cost as a soft drink.

I think if we have to cite our data, we are less likely to make that sort of error and more likely to catch it when it is made.

I definitely don't think FSD is magically safe. So if you think that is what I'm trying to say, please update your beliefs according to my correction that I do not believe this. I think anyone driving in FSD should remain vigilant, because it can make worse decisions than a human would.


A system that protects 400 people but kills 1 is not a system that I want on public roads because I don't want to be in the 1 - Elon and the children of Elon are basically making the assumption that everyone is okay with this.

The probability of an accident for any driver assistance system will ALWAYS be lower than a human driver - but that doesn't mean the system is safe for use with the general public!

People like me are not advocating for "killing people" because we aren't looking at data - it's that no company has the right to make these tradeoffs without the permission and consent of the public.

Also if this was about safety and not just a bunch of dudes who think they are cool because their Tesla can kinda drive itself, why does "FSD" cost $16,000?


> People like me are not advocating for "killing people"

If you are advocating against a system that protects 400 people and kills one, you are advocating for killing people.


> A system that protects 400 people but kills 1 is not a system that I want on public roads because I don't want to be in the 1 - Elon and the children of Elon are basically making the assumption that everyone is okay with this.

> The probability of an accident for any driver assistance system will ALWAYS be lower than a human driver - but that doesn't mean the system is safe for use with the general public!

Totally we should be wary of a system that protects 400 and kills 1. Thank you for providing the numbers. It helps me show my point more clearly.

If you are driving on a road, you encounter cars. Each car is a potential accident risk. You probably encounter a few hundred cars over ten or so miles. Not every car crash kills, but let's just assume they all do, to make this simpler. For the stat you propose, you are talking about feeling uncomfortable with an accident rate somewhere in the ballpark of one accident per ten miles.

Now let's look at the data. The data suggests the actual rate is closer to 6,000,000 miles per accident. This is roughly six orders of magnitude diverged from the miles-per-accident figure you imply would make you feel uncomfortable.

Let's try shifting that into a context people are more familiar with: a one-dollar purchase would be a soft drink, and a six-million-dollar purchase would be something like a house in the Bay Area. That is a pretty big difference, I think. I feel very differently about buying a soft drink versus buying a house in the Bay Area. If someone told me they felt that buying a house was cheap, then gave a proposed price for the house more comparable to the cost of a soft drink, I might suspect they should check the dataset to get a better estimate of housing prices, because it might give them a more reasonable one.

So I very strongly feel we should cite the numbers we use. For example, I feel like you should really try and back up the use of the 400 to 1 number so I understand why you feel that is a reasonable number, because I do not feel that it is a reasonable number.
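
For what it's worth, the arithmetic behind that divergence fits in a few lines (taking the one-accident-per-ten-miles reading of the 400:1 stat, which is my interpretation, against the ~6,000,000 miles-per-accident figure cited in this thread):

    import math

    implied_miles_per_accident = 10          # reading of the 400:1 argument
    observed_miles_per_accident = 6_000_000  # figure cited above

    ratio = observed_miles_per_accident / implied_miles_per_accident
    print(f"{ratio:,.0f}x apart")                           # 600,000x
    print(f"{math.log10(ratio):.1f} orders of magnitude")   # ~5.8, call it six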

> Also if this was about safety and not just a bunch of dudes who think they are cool because their Tesla can kinda drive itself, why does "FSD" cost $16,000?

Uh, we are on a venture-capital-adjacent forum. You obviously know. But... well, the price of FSD is tuned to ensure the company is profitable despite the expense of creating it, as is common in capitalist economies, with healthy companies seeking to make a profit in exchange for providing value. It is actually pretty common for high-effort value creation, like building a self-driving car or performing surgery, to command higher prices.


Interesting graph, I like that it's broken out into quarters. But,

1) Those are statistics for the old version; the new version might be completely different. I've had enough one-line fixes break entire features I was not aware of that my view is that any change invalidates all the tests (including the tests that Tesla should have but doesn't). Now, probably a given update does not cause changes outside its local area, but I can't rely on that until it's been tested.

2) The self-driving is presumably preferentially enabled for highway driving, which I assume has fewer accidents per mile than city driving, so comparing FSD miles to all miles is probably not statistically valid.


I agree with you. I would really like to see datasets that reflect how things actually are. I think it would be really dangerous to jump to the conclusion that FSD is safe on the basis of the data I shared. However, I would hope that whatever opinions people share are congruent with the observed data. The prediction that Elon Musk and Tesla don't care about safety does not, to me, best explain the observed data, which shows that Autopilot has improved safety.

Just for context - I've been in a self-driving vehicle. Anecdotally, someone slammed on the brakes. The car stopped for me, but I was shocked: for hours before this the traffic hadn't changed; it was a cross-country trip. I think I would probably have gotten into an accident there. Also anecdotally, there are times when I felt the car was not driving properly, so I took over. I think it could have gotten into an accident. Basically, for me, the best explanation of the data I've seen right now is that human + self-driving is currently better than human alone and currently better than self-driving alone. The interesting thing about this explanation is how well it tracks with other times we've had technology like this. In chess, for example, there was a period before complete AI supremacy (which is what we have now) where human + AI was better than AI.

I like the idea of being safe, so if the evidence goes the other way - showing that only humans or only AI should do the driving - I want to follow that evidence. Right now I think it shows the mixed strategy is best, and that is kind of nice to me, because it implies that the policy that best collects data to reduce future accidents through learning happens to be the policy that is currently in use. I like that.


As any Tesla supporter will tell you, Autopilot != FSD.

(Is Autopilot still limited to divided, limited access highways? Those are significantly safer than other roadways.)


> Is Autopilot still limited to divided, limited access highways

No. Was it ever? All you need is a piece of road that has something which appears to be lane lines. The road to my house is usable despite having no actual paint striping because it happens to have a crack that runs fairly straight up one side and was filled with tar. So the camera thinks it's a lane line. Ta-da!


This report is for Autopilot, not FSD which everyone else is talking about on HN.


Good point.

The thing is, we often have discussions about this stuff, and I'm trying to advocate for citing datasets so as to more tightly couple our words with the evidence they correspond to. I'm not trying to say this version shouldn't have been recalled, for example, but that I think we should stay close to the evidence.

In the case of Autopilot, people made the same arguments that are now being made against FSD. I think that makes it somewhat relevant to the discussion, because people previously made the same claims about safety, and now that we have the data, we can see those claims were wrong. I believe these sorts of generalizations, though inexact, can help us make more informed decisions, but I'm not really confident in any belief formed at this greater distance from direct data.

So I think anyone who can provide datasets which correspond with FSD performance rather than autopilot performance ought to do so. That would be really great data to reflect on.

The thing I'm worried about is that no data at all is backing the conjectures - which, given that I sometimes see estimates that I calculate to be many orders of magnitude away from data-informed estimates, seems to be the case on Hacker News at least some of the time.


Please ignore all the times I'm wrong in favor of all the times I'm right!


I agree that people who don't cite the evidence are ignoring the evidence? Are you trying to say I'm doing that by pointing to relevant datasets which track the number of accidents and the probability of injury? If so, why are accidents tracked in those datasets such that the rate can be calculated? That rather contradicts the claim that I'm asking anyone to ignore anything - but I definitely agree that other people are ignoring the data, if that is what you are trying to say.


No, your argument is just ridiculous. The standard isn't and shouldn't be how much they get right. It should be what they get wrong and how they do that. I completely disagree with your point, and phrasing it obtusely just makes you obnoxious from a conversational standpoint.


My position is that we ought to include assertions backed by the evidence. Your views probably do have evidence that supports them. I want to see the evidence you are using, because I think that is important.

I'm not sorry that annoys you, because it shouldn't.


>Oh. So you don't like the data, because it disagrees with you. So you are trying to pretend I'm ignoring data, even though I'm linking to summary statistics which by their nature summarize the statistics rather than ignoring the statistics.

Oh, the data is great. I like the data. I'd take the data out to dinner. It's completely beside my point, and you continuing to be obtuse and rephrasing things this way is not only a strawman, it's rude.

> Your views probably do have evidence that supports them. I want to see the evidence you are using, because I think that is important.

Not every policy decision is driven by data. Some are driven by reasoning and sensibility, as well as deference to previous practices. So your whole data-driven shtick is just that... a shtick.


You claim that I said that we should ignore evidence, but I didn't. I claimed that we should look at it.

You claim that I said that we should focus on the good, but I didn't. I claimed that we should look at the data.

Now I feel as if you are trying to argue that looking at data is wrong because not all decisions should be made on the basis of the data. This seems inconsistent to me with your previous assertion that my ignoring data was bad, because now you argue against your own previous position.

That said, uh, datasets related to Bayesian priors support your assertions about deference in decision making. So you could, if you cared to, support that claim with data. It would contradict your broader point that I shouldn't want data, but you could support it with data, and I would agree with you, because contrary to your assertion I was making an argument for evidence-informed statements. My inferences about where the evidence leans should not be taken as a claim that my positions will always match what the evidence shows, because I don't think that is true. I'm obviously going to be wrong often. Everyone is.

Unfortunately, I think you lie too much and shift your goalposts too much. So I'm not going to talk to you anymore.


I never said you shouldn't want to have data. I said that the data isn't the only story, so appeals to data aren't dispositive. Data is clearly the only thing you are capable or willing to talk about. There isn't a point in furthering this conversation if you are just going to repeatedly misrepresent my comments and converse in this incredibly obtuse manner.

I also caught you editing out what was an excessively rude comment. I'm gonna pass on further conversation, thanks.


> I said that the data isn't the only story, so appeals to data aren't dispositive.

No, you didn't. You said please ignore all the times when I'm wrong in favor of when I'm right. This is what you actually said.

You seem to be using language incorrectly. You seem to me to be confusing "said" with "meant", and in this case what you "said" was so different from what you "meant" that I'm strongly getting the impression you are lying to me - but if you aren't, then it is because of confusion about the difference between said and meant.

Words have meanings. They have meanings independent of your own desires, so whatever you meant to say - it doesn't matter at all - that is not what you said. "Please ignore" is fairly characterized as a request for ignoring things, because it maps to something like "it would be pleasing if there was ignorance." Notice it is you who is claiming of me that it would be pleasing to me if there was ignorance. I'm not making such a claim - you said this - maybe you didn't mean this, but you absolutely said it.

I'm not unfairly characterizing your words: this is the actual implication of the words you used, because it is the implication of the meaning of the words - maybe it isn't the meaning you desired, not what you meant, but it is the meaning of the words.

To highlight how far what you actually said diverges from what you now claim you said, notice that what you said implies a belief about me (that I'm requesting ignorance), yet your claim about what you "actually said" expresses a position you hold (that data isn't dispositive). The two don't even have the same referent. You did not say what you claimed to say. If you meant that, you should have said that, but these things are very far diverged.

If you want to state, as your own belief, that data isn't dispositive, you should state that. Instead you consistently made a false reference to my beliefs. Notice that even when you tried to correct my interpretation, you did not switch the reference to your own belief; you said things like "your argument is ridiculous", which is still about me - not about your own position that data is not dispositive. So the statement you now claim to have made - not targeting me, but asserting a general position - is not really there.

> I also caught you editing out what was an excessively rude comment. I'm gonna pass on further conversation, thanks.

As you can imagine, given your confusion about meant versus said, I've been finding that you lie about both your own views and mine. So if I seem a bit rude, it is because I'm kind of assuming you are smart enough to already realize all these things. I don't mean to assume you are so hostile as to know all this and then pretend not to, but it is one of the valid explanations for your behavior. I tried to edit my comment to remove my frustration, and I'm sorry you had to see it like that.

> incredibly obtuse manner.

Which leads to this. You stated that you think I'm being obtuse, but my first assumption was that you weren't just directly lying about what I was saying. Your claim, interpreted according to what you said - not what you meant - is a lie about my beliefs. So I tried to find a meaning for your words that would make them true, not false. I tried to be charitable but was confused, because it really seemed like you were lying about my beliefs. This wasn't me being obtuse. This was me trying to be charitable, but being very confused, because interpreted according to what you actually said - not what you meant - you were strictly speaking stating falsehoods about my beliefs.


> We have many real-world examples of videos where other cars would have killed somebody and the Tesla stopped, based on image recognition.

I think you and I must've watched a different video.


Yes, I have also seen many videos where it makes mistakes. But also many where it prevented accidents.


The person above you has no idea what they're talking about. There are literally hundreds of people at Tesla whose job is QA, plus tools to support QA.


And how does that change anything about my statements?

Yeah, they have QA. But for the problem they claim they're solving (robotaxis) and speed of pushing stuff to customers (on the order of days), it is vastly, vastly insufficient. And it lacks any safety lifecycle process - again, just look at the timelines. Even if you're super efficient, you cannot possibly claim to do even such basic things as proper change management (no, a commit message isn't that) or validation.


> it lacks any safety lifecycle process

completely demonstrably false

> speed of pushing stuff to customers (on the order of days)

this is also false and doesn't happen

> you cannot possibly claim to do even such basic things as proper change management (no, a commit message isn't that) or validation.

you know absolutely nothing about the internal timelines of developments and deployments at tesla and to suggest it's impossible without that knowledge is just dishonest


> > it lacks any safety lifecycle process

> completely demonstrably false

The head of AP testified, under oath, that they don't know what an Operational Design Domain is. I'll just leave it at that.

> > speed of pushing stuff to customers (on the order of days)

> this is also false and doesn't happen

Has Musk never tweeted about a .1 release fixing some critical issues, coming in the next few days? I must live in a different timeline.

> > you cannot possibly claim to do even such basic things as proper change management (no, a commit message isn't that) or validation.

> you know absolutely nothing about the internal timelines of developments and deployments at tesla and to suggest it's impossible without that knowledge is just dishonest

Let's assume I have no internal information. If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.


> Has Musk never tweeted about a .1 release fixing some critical issues, coming in the next few days? I must live in a different timeline.

Let's say you have a baby that is being born. You tweet, "birth in ten days!" You can't then say: look, here is a tweet; it proves that baby development actually has a several-day lifecycle, and moreover it proves that mothers don't follow proper pre-birth routines - because the tweet isn't the process that created the baby.

It is separate.

> If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.

Right. So the fact that we have video evidence of internal processes, including QA processes, is much more like looking, swimming, and quacking like a duck - much as video evidence of a mother with a round belly for months before the tweet would be evidence that babies don't develop in a matter of days.

So when Elon has also tweeted that a launch was delayed because of issues that were discovered - which does happen, as I'm sure you know if you follow his tweets as you imply - that is evidence congruent with the video evidence we have of QA processes existing within the company.


> and speed of pushing stuff to customers (on the order of days)

well, if you don't get the software pushed to the QA team (the customers), how else are they going to get it tested?


Can we please stop with this disinformation? The customers are not the QA team.


Even AP is "autosteer (beta)", so I sure do feel like part of their QA team. And it drives like a slightly inebriated human.

I do have high hopes that the work they've done on the FSD stack will make for a significant improvement to basic AP whenever the two get merged (assuming that ever happens; it has been talked about for years). That'd be nice.


What do you call them, then? There's no way they can make changes to the software and have them thoroughly vetted before the OTA push. Tesla does not have enough company-owned cars driving on public roads to vet these changes. The QA team can, at best, analyze the data received from the customers. That makes the customers the testers in my book.


The fact that you don't know how Tesla vets these changes, very extensively, prior to any physical car receiving the update reinforces that you have no idea what you're talking about.

Tesla does extensive, meaningful vetting of these updates. I'll let you do the research yourself so that maybe you can quit spreading misinformation.


So well vetted that issues keep happening; so well vetted that an OTA "software patch" is raised to the level of an automotive recall. If you call that misinformation, then, "boy, I don't know".


I think you are just correct that customers are acting as testers. I think the issue he is having isn't on that point, where you are right.

It is well known that issues happen despite extensive vetting. The presence of vetting does not guarantee the absence of issues. For example, let's say there are 1000 defects in existence and your diagnostic finds 99% of them. So it finds 990 defects, and 10 defects remain unfound. Next you have another detector that finds all defects. How many defects does it see? 10. Your expectation, given that vetting is taking place, should be that you will tend to observe about ten defects in this particular situation.

So let's say you are someone who is watching that second detector. You observe that you keep seeing defects. Probably, because of selection bias in what gets reported, you observe this with an alarming 100% rate of finding defects - the few times no defects happen, it doesn't get shared, for much the same reason we don't pay attention to drying paint. Can you therefore claim there is no diagnostic detecting and removing defects?

Well, not really, because you aren't observing the defects unconditional on the process that removed them. Basically, when you observe 10 defects, that doesn't mean there weren't 990 other defects that were removed. It just means you observed 10 defects.

So the actual evidence you need to use to decide whether there is a diagnostic taking place is evidence that is more strongly conditional on whether it is taking place. You need something that can observe whether the 990 are being caught.

In this case, we have video evidence of extensive QA processes. That is much stronger than the evidence that defects show up, because defects show up both when vetting happens and when it doesn't. And each reasonable test case ought to decrease the chance of the particular defect that the test case is testing for.
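
The two-detector arithmetic above is easy to sanity-check with a tiny simulation, using the same made-up numbers (1000 defects, a 99% internal catch rate):

    import random

    TOTAL_DEFECTS = 1000   # made-up count from the example above
    VETTING_CATCH = 0.99   # made-up internal QA catch rate

    rng = random.Random(0)
    trials = 1_000
    escaped = [sum(rng.random() > VETTING_CATCH for _ in range(TOTAL_DEFECTS))
               for _ in range(trials)]
    print(sum(escaped) / trials)  # ~10 defects escape to the field on average
    # Seeing ~10 field defects tells you almost nothing about whether the
    # other ~990 were caught upstream by the vetting process.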

For a much more thorough treatment on this subject check out:

https://www.lesswrong.com/tag/bayesian-probability#:~:text=B....

So that is basically why he disagrees with you.

Tesla definitely needs to root-cause the defects that were found and improve the existing vetting process, but it is very obvious that they do have these processes.


Andrej Karpathy was the AI lead for most of the project and he has talked about the general system design.

They have a set of regression tests they run on new code updates, either by feeding in real-world data and ensuring the code outputs the expected result, or by running the code in simulation.
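
Roughly, a replay-style regression test has this shape (the function names and on-disk format here are hypothetical stand-ins to show the idea, not Tesla's actual harness):

    import json

    def plan_from_frame(frame):
        # Hypothetical stand-in for the driving stack's planner. A real
        # system would run perception + planning on the recorded sensor
        # data; this toy rule just brakes when an obstacle is close.
        return "brake" if frame["obstacle_distance_m"] < 20 else "cruise"

    def test_replay_logged_scenarios(path="scenarios.jsonl"):
        # Each line: one recorded real-world frame plus the vetted output
        # that the new build is expected to reproduce.
        with open(path) as f:
            for line in f:
                frame = json.loads(line)
                got = plan_from_frame(frame)
                assert got == frame["expected_plan"], (frame["id"], got)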

It does seem worrying that they would miss things like this.

Here’s a talk from Karpathy explaining the system in 2021:

https://youtu.be/aNVbp0WKYzY

Though I don't recall if he explains the regression testing in this talk; there are a few good ones on YouTube.


It's not even a bit surprising they'd miss things like this, IMHO. They do tests with a few (maybe even a lot of) intersections, but there are thousands upon thousands of intersections out there, including some where bushes obscure a stop sign, or the sign is at a funny angle, or sunlight is reflecting off the traffic lights, or heavy rain obscures them, or plain old ambiguous signage... there are _bound_ to be mistakes. Human drivers make similar mistakes all the time.

I used to think that fact was going to delay self-driving cars by a decade or more, because of the potential bad press involved in AI-caused accidents, but then along comes Tesla and enables the damn thing as a beta. I mean...good for them, but I've always wondered if it was going to last.

I've been using it pretty consistently for a few months now (albeit with my foot near the brake at all times). I haven't experienced any of the above. The worst thing I've seen is the car slamming on the brakes on the freeway for... some reason? There was a pile-up in a tunnel caused by exactly that a month or so ago, so I've been careful not to use FSD when I'm being tailgated, or in dense traffic.


You know, there was an article on here last week about how there are only 4 billion floats, so just test them all.

There are only like 16 million intersections in the US. Why not test them all?


The thing is you already know everything you need to know about all 4 billion floats. Collecting data on every intersection in the US is quite difficult.

Tesla does, however, collect data on edge cases and then train their system to respond correctly. They can, for example, train a detection network to identify things that might be obscured stop signs, then have the fleet collect a whole bunch of examples, hand-label those samples, and roll this new data into the training system. This is explicitly how they handle edge cases.

They can also create a new feature or network and roll it out in “shadow mode” where it is running but has no influence on the car, and then they can observe how these systems are behaving in the real world.
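
As a rough sketch, shadow mode amounts to running a candidate model alongside the live one and logging only the disagreements (all names here are invented; this is the shape of the idea, not Tesla's implementation):

    def shadow_step(frame, live_model, shadow_model, disagreements):
        # Both models see the same input; only the live output drives.
        live_out = live_model(frame)
        shadow_out = shadow_model(frame)  # has no influence on the car
        if shadow_out != live_out:
            # Disagreements get logged/uploaded for offline analysis.
            disagreements.append({"frame_id": frame["id"],
                                  "live": live_out,
                                  "shadow": shadow_out})
        return live_out  # only the vetted model controls actuation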

The real issue, I guess, is when they release a new feature without trialing it in shadow mode, or when they have gaps in their testing and validation system.


> Does anyone have insights on what QA looks like at Tesla for FSD work?

Yes. An army of Tesla owners perform the QA, in production.


Well first it goes to influencers that say it's perfect and good for stable release no matter what the car does!

But in all seriousness, they do have a small team that validates it; then it goes to employees.


That's the thing about neural networks: any QA is going to be superficial due to their statistical black box nature.


Exactly. It's the same reason that no amount of unit tests can replace formal methods for safety-critical software, and we cannot apply formal methods to neural nets [yet].


That's not really true. Most safety-critical software is tested without formal verification. The testing is just really, really thorough and rigorous.

Formal verification is obviously better if you can do it. But it's still really, really difficult, and plenty of software simply can't be formally verified. Even in hardware, where the problem is a lot easier, we've only recently gotten the technology to formally verify a lot of things, and plenty of things are still out of reach.

And even if you do formally verify some software it doesn't guarantee it is free of bugs.


What? Black box testing has plenty of techniques: https://en.wikipedia.org/wiki/Black-box_testing

Whether it's a neural network inside or not is completely irrelevant. That's why it's called "black box".


Practical neural networks operate in enormous parameter spaces that are impossible to meaningfully test for all possible adversarial inputs and degraded outputs. Your FSD could well recognize stop signs in your battery of tests but not when someone drew a squirrel on it with green sharpie.
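
One partial mitigation is perturbation (metamorphic) testing: asserting that small, label-preserving input changes don't flip the output. A minimal sketch, where "classify" and the perturbation are hypothetical placeholders - and note it still misses the sharpie-squirrel case, since the space of degraded inputs is far too big to enumerate:

    import random

    def perturb(pixels, rng):
        # Hypothetical label-preserving tweak: jitter pixel brightness.
        return [min(255, max(0, p + rng.randint(-8, 8))) for p in pixels]

    def test_stop_sign_is_stable(classify, stop_sign_pixels, n=100):
        rng = random.Random(42)
        assert classify(stop_sign_pixels) == "stop_sign"
        for _ in range(n):
            # The label should survive small, harmless perturbations.
            assert classify(perturb(stop_sign_pixels, rng)) == "stop_sign"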


Something a bit similar is a clinical trial, and that is accepted without problem.

You run a black-box test on several thousand (sometimes only hundreds of) patients, and if the patients who received the drug perform better than the patients who received the placebo, the drug is usually accepted for commercialization.

Yet an individual patient may be subject to several comorbidities, her environment could be unusual, and she could be taking other drugs (or coffee, OTC vitamins, or even pomelo) without having declared it. In the recent past, women were often excluded from clinical trials because being pregnant makes them very "non-standard".


> Something a bit similar is a clinical trial, and that is accepted without problem.

Clinical trials also have strict ethical oversight and are opt-in. If clinical trials were like Teslas, we'd yeet drugs into mailboxes and see what happened.


First of all, clinical trials are typically longer and more thorough than you imagine; they span years. The fact that the COVID vaccines were fast-tracked gives people the wrong idea about it.

Secondly, even after the product hits the market, the company is still responsible for tracking any possible adverse effects. They have a hotline where a patient or doctor can report them, and every single employee or contractor (including receptionists, cleaning staff, etc.) is taught to report such events through proper internal channels if they accidentally learn about them.


> clinical trials are typically longer and more thorough than you imagine; they span years

I don't know where you get that; most clinical trials last 26 weeks, even in phase III.

And about "more thorough than you imagine" - no. Most are subcontracted to CROs, and the way clinical trials are conducted is messy.

Below is a story from the POV of a PI.

But similarly, many patients complain about the way they are treated during visits and about the lack of interest of the nurse/doctor who receives them.

https://milkyeggs.com/biology/why-are-clinical-trials-so-exp...


Your run of the mill computer program also "operates in enormous parameter spaces that are impossible to meaningfully test for all possible adversarial inputs and degraded outputs".


This is hardly similar as the state of a typical computer program can be meaningfully inspected, allowing both useful insights for adversarial test setups and designing comprehensive formal tests.


Right; if you consider the internal state, it is hardly similar. You talked about black boxes and QA, though. A black box by definition holds the internal state as irrelevant, and QA mostly treats the software it tests as a black box - in other words, the tests are "superficial", as you call it.


Black box testing in typical software is, however, less superficial, because the tester can make inferences and predictions about what inputs will affect the results. When you're testing a drawing program, for example, you may not know how the rectangle tool actually works internally, but you can make some pretty educated guesses about what types of inputs and program state affect its behavior. If the whole thing is controlled by a neural network with a staggering amount of internal state, the connections you can draw are much, much more tenuous, and consequently the confidence in your test methodology is significantly harder to come by.


This seems to ignore that if you look inside the box at code, you can understand it, whereas looking at the activation values is unlikely to illuminate anything.


As a human being and motor vehicle operator of many decades I have done all of the above, multiple times (very infrequently), both on purpose and by accident. I'm looking forward to the days when self-driving vehicles are normal, and human drivers are the exception. Until then, I'm glad companies and regulators are holding the robots to a higher standard than the meat computers.


> As a human being and motor vehicle operator of many decades I have done all of the above, multiple times

Time to stop driving. That is not normal


It also does not know what one-way street, do-not-enter, road-closed, and speed-limit signs are. Really, the only signs it appears to know about are stop signs.

As for their QA process: in 2018 they had a braking-distance problem on the Model 3. They learned of it, implemented a change that alters the safety-critical operation of the brakes, then pushed it to production on all Model 3s, with no staged rollout testing, in less than a week [1]. So their QA process is probably: it compiles, run it a few times on the nearby streets (I am pretty sure they do not own a test track, as I have never seen a picture of tricked-out Teslas doing test runs at any of their facilities), ship it.

[1] https://www.consumerreports.org/car-safety/tesla-model-3-get...


Teslas have understood speed limit signs since 2020.[1]

1. https://finance.yahoo.com/news/upcoming-tesla-software-2020-...


It uses maps for that.

I have a winding road near me with a speed limit of 35 mph, but 15 mph on certain curves, as indicated by speed limit signs. It ignores those speed limit signs and will attempt to take the curves at 35 mph, resulting in it wildly swerving into the other lane and around a blind turn with maybe 30 feet of visibility. It has also done this so poorly that it would have driven across the lane and then over the cliff without immediate intervention.

Unsupported claims by a manufacturer that compulsively lies about the capabilities of its products, except when directly called on it, are the opposite of compelling evidence.


I'm talking about standard speed limit signs. You're talking about the signs that warn about sharp turns and advise maximum speeds. Yes it would be good if the software understood those signs, but that's a different issue.

Teslas definitely read speed limit signs. I've had mine correctly detect and follow speed limits in areas without connectivity or map data. It also follows speed limits on private drives (if there is a sign) and obeys temporary speed limit signs that have been put up in construction zones.


So they read some, but not all, speed limit signs - and especially not the really important ones that tell you you will be going dangerously fast if you do not read and follow them. That is criminally unacceptable.


Can you name any car manufacturer that has software to read those signs?


Waymo. Cruise. And I do not see how the presence or absence of features in other products has any bearing on the lacking safety characteristics of FSD.

Frankly, attempting to deflect by arguing that it is okay to release a defective safety critical product to unsuspecting consumers just because nobody else is willing to offer a similar product because they have some moral integrity is a stance that makes the executives at Ford presiding over the Pinto look like angels in comparison. All the Ford executives did was cover it up to avoid having to pay to fix it. At least they did not intentionally release a known defective and dangerous product just to recognize some revenue.


Neither of those companies have software that reads those signs and use them to choose cornering speeds. They both use high resolution 3D maps from previous lidar scans of the road.


>I'm talking about standard speed limit signs. You're talking about the signs that warn about sharp turns and advise maximum speeds.

Cognitive dissonance at its finest.


These are not the speed limit signs you are looking for!


Not according to the recall the NHTSA posted that is the subject of this entire thread...


There are many such tests out in the open.

There is even a former Tesla AI engineer who throws objects in front of the car on YouTube, as a demonstration.

The results are not glorious at all :| (trying to find the channel again - shout if someone knows it).

And random public tests too: https://www.youtube.com/watch?v=3mnG_Gbxf_w

This is basic safety auto-braking. It just feels very wrong to even accept that it goes into a release.


This is not a former Tesla engineer. This is a competitor who wants to discredit Tesla and sell its own solution.

The guy behind this is known to be untrustworthy, and many of the videos don't actually show what he claims. Notably, he refused to release the footage that would prove his claims right.

The reality is that Tesla scores high on all the automated braking tests done by governments. The driver, however, can override this, and that is exactly what is being done in this video.


So they scored high on whatever version of the software was on the specific car tested by a government at some point in time. Has any government done any testing at all on the version of the software actually in use today?


Automated braking is a standard part of the software on every Tesla and is the software that was tested by multiple governments. That is what gives you certification. If Tesla changed or removed that software, it would be highly illegal. Tesla is actually notable in that they are one of the only companies to have implemented the highest standard of this safety feature on every car. These are features Tesla is proud of and has talked about quite a bit. They also mention that they have achieved the highest scores in these evaluations.

The test above has been replicated, and the car does brake in automated driving. It also gives a loud warning to the driver even under normal driving operation and performs emergency braking.

Tesla assumes that if the driver hits the accelerator after the warning, the driver wants to accelerate, based on the driver's judgment.

This is what this video shows. Notably, the person who made this video has refused to release proof that the car actually was in Autopilot, refused to provide evidence that the driver didn't hit the accelerator, and refused to provide audio from inside the car so it can be verified that there was no warning sound.

In addition to that, the same person also claims things like "millions of people will die if Autopilot isn't stopped", which even under absurd assumptions is an insane thing to say.

So what is more likely: that Tesla did something incredibly illegal, removing an incredibly important piece of safety-critical software that is standard in every Tesla, OR that a competitor is running a PR campaign where they deliberately set up a situation to film a video that does maximum damage to Tesla, and then advocates for their own solution?

https://www.tesla.com/blog/model-y-earns-5-star-safety-ratin...

So the question is: who do you believe, Euro NCAP or a competitor who made a sensationalist viral video?


You are probably thinking of: https://www.youtube.com/@AIAddict


I suspect they have thousands of tests, but ship code that passes only most of them...

That's what makes it unfinished...

It's never passed the "drive from New York to LA with nobody touching the controls" test...


Note that this recall just means that there's a regulator-mandated patch, not that FSD is being removed.


Both of my cars patched last night with “Bug fixes” as the changelog. I don’t think I’ve seen an update that only said that before. I suspect that was this “recall”. If that’s the case, then the recall (patch) is probably mostly done by now.


I've had 4-6 "misc bug fixes" patches since 2016, just FYI.


If the changelog of my car's safety-critical system were limited to "misc bug fixes", I would never touch it again. Either have extreme transparency into the system, or don't have the system.

I level this complaint against my own car, too. The AEB radar just stopped passing start-up self-tests for a while, and I had no idea why and could not find out without going to the dealership. While I was waiting to do that, it started working again. I don't consider it a backup, or a helper, or anything. It's an independent roll of the dice on an upcoming crash that I already failed to prevent. If it does anything in that case, that's probably an improvement, but I don't exactly trust it to save my bacon.


I had that update message, too, though I thought I was just installing that “adjustments to the steering wheel warmer” update. I don’t have FSD.


Crash -> fire -> steering wheel is warmed.

Could still be related.


For sure. Just another genius move in Elon’s 4D chess game.


Yes, the description of the remedy in the recall says:

The remedy OTA software update will improve how FSD Beta negotiates certain driving maneuvers during the conditions described above, whereas a software release without the remedy does not contain the improvements.


I generally root for Tesla as a former underdog who lit the fire under the asses of stagnating manufacturers, but, seeing videos of FSD in action, I'm fully on the side of people who think that calling it FSD should be considered fraud.

Given how much time and data they've had so far, and the state it's in, it really makes news like Zoox getting a testing permit for public roads, without any manual controls in the vehicle, seem crazy irresponsible and dangerous. Is it possible that they are just that much better at cracking autonomous driving?


Unlike Tesla, Mercedes will take responsibility if its level-3 autonomous system malfunctions. If I ever get an autonomous vehicle, this will be the main deciding factor.


Taking legal liability ought to be the truly defining difference between L2 and higher systems. Anything L2 (including Tesla's horribly named FSD) needs to be clearly labelled as a driver assistance feature. Until the manufacturer takes legal responsibility, these features should stay labelled as assistance, and agencies like the FTC should be enforcing the terminology much more carefully.


What videos are you talking about? When I watch recent FSD videos (there have been regressions recently) on YouTube, the car generally drives itself. If you showed those videos to someone in the 90s, they would without hesitation remark "that car drives itself!" There is zero ambiguity about this. Nobody said that a self-driving car could never make mistakes. The 90s person would definitely notice those mistakes but would come to the conclusion that we had pretty much done it and that we were going to have fully reliable self-driving cars shortly. FSD is the largest experiment of its kind; it's one of the coolest software projects of all time. How Hacker News users are blind to that is beyond me.

There are hundreds of videos that directly contradict your comment. The funny thing is that on Reddit and on Hacker News, as well as all mainstream news outlets, I have never, not even once, seen one of these videos posted or even linked to or even referred to. It's like they don't exist, despite the fact that there are hundreds of them just a click away.

You can say that we shouldn't be experimenting on the roads. That's a matter of opinion. But to say that FSD isn't the most advanced and capable system available, to say that it's a fraud, to say that it's failed, is all objectively false. Just look at where they started and look at one of the latest videos.

And in what may be a world first, I will link to a video here on Hacker News.

https://www.youtube.com/watch?v=mHadhx3c840&ab_channel=AIDRI...

The same guy, who drives FSD every day, is deeply involved in reporting bugs and publishes videos about FSD. By any metric that matters, his opinion is worth more than yours or anyone else's on Hacker News or Reddit.

https://www.youtube.com/watch?v=Nvvhmc837Tw&ab_channel=AIDRI...


The key phrase being 'generally'. Really advanced L2 assistance features that bleed into L3+ abilities like FSD work right up until they don't, then they hand control back to a driver who is not situationally aware enough to safely operate the vehicle. "Full Self Driving" is stretching those three terms far beyond the breaking point and Tesla should absolutely be slapped down by the FTC for false and misleading advertising.

If Tesla wants to call it full self driving, they should take legal responsibility for the car while FSD is engaged. Until then it's a dangerous beta test that pedestrians and other road users didn't sign up for.

I like my tesla, but autopilot drives like a poorly trained teenager and I don't ever use it. FSD isn't much better and needs to be tested with proper rigour by trained employees of Tesla, not random owners.


It's true that the name is not accurate. But in reality the name hasn't caused anyone to buy the product because they believed they could take a nap behind the wheel, and Tesla doesn't deceptively market the product as such. The name is aspirational; maybe one person in the world was taken by surprise by it after buying it. It's just a non-issue in the real, practical world. And it has nothing to do with the substance of the self-driving conversation. People just splice it in, and usually not long after they add in some emerald-mine rumors. I would like to just talk about the matter at hand, which is self-driving.

I have never heard of someone who characterized Autopilot as a poorly trained teenager. Not sure how that's possible, since it's just adaptive cruise control for the most part. And poorly trained teenagers can and do drive every day. I think we are past the point of accusations of fraud.


As posted on the Tesla subreddit [1]:

> Remedy: Tesla will release an over-the-air (OTA) software update, free of charge. Owner notification letters are expected to be mailed by April 15, 2023. Owners may contact Tesla customer service at 1-877-798-3752. Tesla's number for this recall is SB-23-00-001.

[1] https://www.reddit.com/r/teslamotors/comments/113wltl/commen...


Why even bother with a letter?

Surely the letter could be displayed on screen, the way most other tech displays some text of sorts before an update occurs.

Are these letters designed to satisfy the legalese types, or is paper still required to make sure tech companies don't make post-update changes to the letter contents?


They are required to. It's written in law: https://www.ecfr.gov/current/title-49/subtitle-B/chapter-V/p...


A lot of car owners don't have reliable internet at home and live outside of cell tower coverage (mountainous areas, especially).

Statistically these are not likely to be Tesla owners, but it's about making sure people know about the issue and how to fix it.


When GP says "the update could be displayed on-screen," they are referring to the car's user interface, and not a secondary device like a computer or cell phone.

Incidentally, the point about spotty Internet connectivity cuts both ways: in such a scenario, they wouldn't be able to receive the OTA update either.


When they get the letter in the mail, they'll know to take their car to the nearest library or Starbucks or wherever with wi-fi to install the update.


So Tesla vehicles don't ping notices to other Tesla vehicles they pass, handing out update info much the same way the COVID smartphone Bluetooth vicinity checks worked? That surprises me. I think even Windows now lets an updated computer notify and update other Windows PCs that are more off-grid; it's called Delivery Optimization, but it should work for local hotspots in the wild.


Cars driving past each other rarely are in range for long enough to communicate meaningfully.

Locations where multiple Tesla vehicles are likely to congregate - Superchargers and Tesla retail/service locations - already have good internet connectivity.


The amount of hyperbole and completely uninformed takes around FSD is eye opening.

I’ll completely agree that “Full self driving” is a misleading name, and they should be forced to change it, full stop.

That being said, it’s exceptionally clear that all the responsibility is on you, the driver, while you are using it. The messaging in the vehicle is concise and constant (not just the wall of text you read when opting in.) Eye tracking makes sure you are actually engaged and looking at the road, otherwise the car will disengage and you’ll eventually lose the ability to use FSD. Why is there never a mention of this in any coverage? Because it’s more salacious to believe people are asleep at the wheel.

Is it perfect? No, though it’s a lot better than many people seem to want the world to believe. It’s so easy to overlook the fact that human drivers are also very, very far from perfect.


>That being said, it’s exceptionally clear that all the responsibility is on you, the driver

Virtually every study ever done on human-machine interaction shows that users inevitably lose reaction time and attention when they are engaged with half-automated systems, because the constant context switching creates extreme issues.

Waymo did studies on this in the earlier days and very quickly came to the conclusion that it's full autonomy or nothing. Comparisons to human performance are nonsensical because machines don't operate like human beings. If factory-floor robots had the error rate of a human being you'd find a lot of limbs on the floor. When we interact with autonomous systems that a human can never predict, precision needs to far exceed that of a human being for the cooperation to work. A two-ton black box moving at speeds that kill people is not something any user can responsibly engage with at all.


>....“Full self driving” is a misleading name, and they should be forced to change it, full stop.

Problem and solution.

Nothing more need be said.


This one issue is one of the primary reasons Google decided not to acquire Tesla really early on. Google has been way ahead of them for a long time, and their engineers were extremely aware of how reckless this type of marketing from Musk was.

But the marketing worked. They sold the dream to people, only got sued a couple of times, and still got the most subsidization of any automaker.


Remains to be seen what the OTA patch will actually do. If they could make FSD work correctly they would have _already_ done it. So, my guess is more smoke and mirrors to mislead the NHTSA, and then NHTSA will come down with the ban hammer on FSD and require them to disable it entirely.


"Tesla will deliver an over-the-air software update to cars to address the issues, the recall notice said"

A bit of a sensational title compared to what this really is.


No it isn't. Recall means "the manufacturer sold you something they should not have, and is legally required to remedy that problem". It has NOTHING to do with the action needing to be taken. It has NOTHING to do with the product category. Lots of recalls, e.g. for food or children's toys, basically say "throw it out".

People keep making this "recall means go back to the dealer" claim simply because they never pay attention to all the recalls in the world that don't make it to their mailbox the way car recalls do.


That's the same for other manufacturers. It's called a recall in legal terms, in the sense that the original product as used is not safe and needs to be changed. The changing can luckily be done OTA instead of driving to a service center to plug it in first.


Does a recall like this actually mean owners return their car to Tesla for modification? Or would it be an over-the-air update to remove FSD?


> The auto safety regulator said the Tesla software allows a vehicle to "exceed speed limits or travel through intersections in an unlawful or unpredictable manner increases the risk of a crash." Tesla will release an over-the-air (OTA) software update, free of charge.

https://www.reuters.com/business/autos-transportation/tesla-...

Sounds like it's just a patch release with regulators involved.


I'm not sure how to reconcile that statement with how FSD beta works. Fine, the speed limit thing is easy to fix-- just cap the max speed to the speed limit.

But "[traveling] through intersections in an unlawful or unpredictable manner" is inherent to the FSD beta. Most of the time it does "fine", but there are some intersections where it will inexplicably swerve into different lanes and then correct itself. And this can change beta-to-beta (one in particular used to be bad, got fixed at some point, then went back to the old swerving behavior).


If they accepted a software update, I would assume it would be about things that are deliberately programmed in, rather than just fundamental limitations? But I agree it's a bit odd. The longer description says

> The FSD Beta system may allow the vehicle to act unsafe around intersections, such as traveling straight through an intersection while in a turn-only lane, entering a stop sign-controlled intersection without coming to a complete stop, or proceeding into an intersection during a steady yellow traffic signal without due caution. In addition, the system may respond insufficiently to changes in posted speed limits or not adequately account for the driver's adjustment of the vehicle's speed to exceed posted speed limit. [https://www.nhtsa.gov/vehicle/2020/TESLA/MODEL%252520Y/SUV/A...]

"entering a stop sign-controlled intersection without coming to a complete stop" sounds a lot like the thing they already issued a recall over in January 2020 (further down the same web page), so it seems a bit odd that it would still be an issue. And "due caution" to yellow lights seems like it could just be tuning some parameter. On the other hand, failing to recognize turn-only lanes sounds more like a failure of the computer vision system...


I believe the stop sign adjustment is to increase dwell time at empty intersections. i.e. stop and count to two before proceeding.


This is what I'm most interested in.

I'm hopeful about Tesla FSD, and don't think it necessarily needs to be perfect, just significantly better than humans. So I'm rooting for Tesla FSD/Autopilot overall. I just don't see how, given the findings, there is a solution short of removing the entire FSD feature.


Almost all of these "recalls" that make the news are just software patches. The only difference between these and a normal patch is that customers get an email about it.

I can understand why people might think that all these recalls require going back to the shop, that's how most legacy makers work to this day.


The solution being (supposedly) easy doesn't discount the severity of the issue. Owners must be informed and aware.


It's also the word "recall", as opposed to "urgent patch", that makes people think that the car is going back to manufacturer.

Non-Tesla automakers are not "legacy".


Seems like they should come up with a new phrase to distinguish between making modifications to the car at a dealer or service provider vs. a downloadable update, right?


Yes, this is just a historical term that nobody bothered to change.

We have to remember that Tesla was the first company to really do this, and even today almost no other company does it. Most can update some parts of their system, but almost none have anywhere close to the integration Tesla has.

So for 99% of recalls it is a physical recall; it's just Tesla where most of the time it isn't.


I had the same question. I think this is the original report: https://static.nhtsa.gov/odi/rcl/2023/RCLRPT-23V085-3451.PDF

On Page 4 it mentions the "Description of Remedy" as an OTA update. Gives a new meaning to a recall!

https://howtune.com/recalls/ford/ford/1980/

Software > Stickers


Why does the media not make this clear in the headline?


Because it gets more clicks. Reading about how Tesla will lose lots of money because they have to recall 350k vehicles is a juicy story, especially if it's about removing a major feature; a slight software update to that feature is boring.


Because Tesla does not pay for advertising in media outlets. So many media outlets tend to have it in for Tesla.


They hate Musk ever since he took twitter away from them.


That was my first thought.

Also, what percentage of people who paid for "FSD" needed their car recalled?


This is 100%. It's a required software update on all the FSD vehicles.


Recall = Over the Air Update.

Not nothing, but not as big as CNBC is making it out to be.


Splitting hairs over the definition of "recall" for a potentially deadly flaw in 362,758 automobiles is silly. But it is the correct word in that industry when there is one.

I worked at an Oldsmobile dealer in the 80s and fixed all kinds of issues on cars that were "recalled", and that is what we called it way back then and long before it. Some were trivial and others were serious safety issues.

https://www.kbb.com/car-advice/what-do-i-need-to-know-about-...


The news headlines coming out are also interesting. 'Tesla Recalls' vs 'Tesla Voluntary Recalls' are technically both true, however the former is a much more popular headline while being less precise.


If Tesla is not careful with this, drivers of other vehicles will have serious reservations about being anywhere around a Tesla. I have to say, I already do.

I will not stay behind or next to a Tesla if I can avoid it. I'll avoid being in front of one if the distance is such that I cannot react if the thing decides to suddenly accelerate or, while stopping, not brake enough or at all.

In other words, I have no interest in risking my life and that of my family based on decisions made by either Tesla drivers (engaging drive-assist while not paying attention, sleeping, etc.) or Tesla engineering.

Will this sentiment change? Over time, sure, if we do the right things. My gut feeling is that a program similar to crash safety testing will need to be instituted at some point.

A qualified government agency needs to come-up with a serious "torture" test for self-driving cars. Cars must pass a range of required scenario response requirements. Cars will need to be graded based on the result of running the test suite. And, of course, the test suite needs to include an evaluation of scenario response under various failure modes (sensor damage, impairment, disablement and computing system issues).

I am not for greatly expanded government regulation over everything in our lives. However, something like this would, in my opinion, more than justify it. This isn't much different from aircraft and aircraft system certification or medical device testing and licensing.


I was driving behind a Tesla, which I can only assume was in FSD mode, down a narrow side street coming up to a turn onto a busy intersection. The car/driver almost drove straight into cross traffic, ended up blocking a lane without moving for like 15 seconds before it turned right (while signalling left), and then almost crashed again into the oncoming traffic. Seriously unsafe.


I found an interesting phrase in the official report. NHTSA formally says that not respecting 'local driving customs' is a defect:

     ...the feature could potentially infringe upon local traffic *laws or customs* while executing certain driving maneuvers...
Do they want Tesla to create a DB with 'allowed' and 'locally faux pas' driving maneuvers? It sure reads like they do.


Yes, and good that they do.


I thought that empirically, statistically, the number of crashes from FSD was drastically lower than from human drivers. Was this number hogwash, or is this just lawboys saying the number must be 0 before it's allowed on the road?

If so, that's pretty crazy and people will die because of this decision.


The number was problematic for various reasons:

The usual comparison is vs. all cars on the road, but Teslas are comparatively new cars with a lot of recent and expensive safety features not available in older cars. They're also likely to be maintained better. On top of that, they're driven by a different demographic, which skews accident statistics.

Tesla's Autopilot can only be enabled in comparatively safe and simple circumstances, yet the comparison is made against all drivers in all situations. When Autopilot detects a situation it can't handle, it turns off and hands over to the human, who gets a few seconds' warning. Human drivers don't get to punt the issue like that.

Tesla FSD may be safer than human drivers for the limited set of environments where you can use it, but last time I checked, the numbers that Tesla published are useless for demonstrating that.


> Teslas autopilot can only be enabled in comparatively safe and simple circumstances

This is incorrect. FSD can be enabled everywhere from dirt road to parking lots, even highways and dense urban environments like NYC. There is no geofence on FSD, you can turn it on anywhere the car can see drivable space.


Does it enable in heavy rain or snow, on ice, fog? Does it work in storm conditions with leaves and objects blown over the street? Does it work in confusing traffic situations, invalid signage, …?


I didn’t say it works in those scenarios, I said it lets you enable it.

It enables in snow and rain yes. The only time it refuses to engage is in extreme weather that obstructs the cameras.

It gets confused a lot in heavy traffic but it will attempt anything.


You can’t compare FSD crashes / mile with human data for a couple of reasons.

The primary one is:

1) FSD turns itself off whenever things get too difficult or complex for it. Human drivers don't get to do this. Recent crowdsourced data suggests a disengagement every 6 miles of driving: https://twitter.com/TaylorOgan/status/1602774341602639872

If you eliminate all the difficult bits of driving I bet you could eliminate a lot of human driver crashes from your statistics too!

A secondary issue, but relevant if you care about honesty in statistical comparisons is

2) The average vehicle & driver in the non-Tesla population is completely different. Unless you correct for these differences somehow, any comparison you might make is suspect.
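
As a toy sketch of point 1, the selection effect (every number here is made up, not real crash data):

  # Toy illustration of the selection effect (all numbers invented).
  # A system that only drives the easy miles and disengages for the hard
  # ones posts a better crashes-per-mile figure without being safer.
  easy_miles, hard_miles = 90.0, 10.0   # hypothetical split of driving
  rate_easy, rate_hard = 0.1, 3.0       # crashes per 100 miles, invented

  human_rate = (easy_miles * rate_easy + hard_miles * rate_hard) / (easy_miles + hard_miles)
  system_rate = rate_easy               # hard miles get punted to the human

  print(human_rate)   # 0.39 crashes per 100 miles across all driving
  print(system_rate)  # 0.10 -- looks ~4x "safer" purely from mile selection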


Dumb question: why would anyone sign-up for a beta program, that could kill you if gone wrong?

(maybe people just don't think through ramifications)


How would the FSD beta program kill me?

I realize that not everybody is in agreement, but I personally use the FSD beta while remaining fully in control of the vehicle. I steer with it, I watch the road conditions, I check my blind spots when it is changing lanes, I hit the accelerator if it is slowing unexpectedly, I hit the brakes if it is not...

You know, basically behaving exactly as the terms you have to agree to in order to use the FSD beta say you are going to behave.

When I look at the wreck in the tunnel (in San Francisco?) a few months ago, my first thought is: how did the driver allow that car to come to a full stop in that situation? Seriously, you are on a highway and your car pulls over half way out of the lane and gradually slows to a complete stop. Even if you were on your phone, you'd feel the unexpected deceleration, a quick glance would show that there was no obstruction, and the car further slowed to a complete stop.

FSD is terrible in many situations, that is absolutely true. But, knowing the limitations, it can also provide some great advantages. In heavy interstate traffic, for example, I'll usually enable it and then tap the signal to have it do a lane change: I'll check my blind spots and mirrors, look for closing traffic behind me, but it's very nice to have the car double checking that nobody is in my blind spot. There are many situations where, knowing the limitations, the Tesla system can help.


It would kill you by messing up...? Your question is insane. It kills you by fucking up and you don't have time to correct the car.

Good for you that you apparently know how to "use it correctly" or whatever but that's not exactly the point here.


Sure, you are in control of your Tesla, but you are obviously not in control of other Teslas. Someone else who isn't as diligent as you, or who doesn't care about its limitations, will happily crash head-on into you, blissfully unaware of their own ignorance.


I'm not in control of any other cars on the road. Many of those cars are driven by drunk, distracted people who cause thousands of deadly accidents every single year.


> how did the driver allow that car to come to a full stop in that situation? ... Even if you were on your phone, you'd feel the unexpected deceleration, a quick glance would show that there was no obstruction

In all honesty, I'd probably spend a few seconds trying to figure out "what does the car see that I don't?" and let it come to a stop. Maybe it's a catastrophic system failure that I can't see. Maybe it's an obstacle headed into my path that hasn't gotten my attention. If my reflexive reaction is supposed to be to distrust the car's own assessment of the situation, then the system isn't good enough yet.


>How would the FSD beta program kill me?

By crashing in a way that is fatal to your life


How can you be sure you aren't just being overconfident about your ability to prevent an accident?


... 35 accident-free years? :-). Obviously I don't want to be complacent, and I need to be on top of aging related decline of abilities...


It is a dumb question, but that's the best kind.

"could kill you if gone wrong" is nearly every choice in life. The relevant metric is risk of death or other bad outcomes.

Flying in a commercial airliner could kill you if things go wrong. Turns out it's safer than driving the same distance.

Eating food could kill you if things go wrong (severe allergy, choking, poisoning) yet it's far preferable to not eating food.

Similarly, Tesla's ADAS could kill you if things go wrong but the accident rate (as of 2022) is 8x lower than the average car on the road, and 2.5x lower than a Tesla without ADAS engaged.

Don't let the perfect be the enemy of the good.


I guess they think that not being in the beta program also has a non-zero chance of killing you. Either that or they are willing to exchange lower safety for greater pleasure/convenience (which everyone is willing to do sometimes, otherwise they would stay inside 100% of the time). I wouldn't use it myself, however.


People drive motorcycles, which seems much more dangerous. I don't think the risk of self driving is that high but it does seem stressful.


Yeah, people don't understand how dangerous motorcycles are. The fatality rate for motorcycles in the US is 23 per 100 million miles traveled. For cars that number is 0.7. Assuming you ride 4,000 miles per year for 40 years, that gives you a 4% chance of death. The chance of being crippled or receiving a brain injury is probably higher than that.
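
The arithmetic roughly checks out:

  # Verifying the parent's figures (rates as stated, per 100 million miles).
  moto_rate, car_rate = 23, 0.7
  lifetime_miles = 4_000 * 40              # 160,000 miles over 40 years

  print(lifetime_miles * moto_rate / 1e8)  # 0.0368 -> roughly the 4% quoted
  print(lifetime_miles * car_rate / 1e8)   # 0.00112 -> ~0.1% for cars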


I'm creating a new bulletproof vest.

Would you like to participate in the beta program, to test it stopping bullets being shot at you?


That feels like a very different scenario. A successful test would still leave me winded and bruised. The odds of failure are much higher than 1 in 100,000 per year. I also have no need for a bullet proof vest so why would I be interested in testing it - what's the benefit to me?


Virtually everyone will take the vest that needs improvement over none at all


Virtually everyone will opt out of the test, because the choice is not, "I'm going to shoot a gun at you as a test. Do you want the experimental bullet proof vest or no bullet proof vest?"


Not if the vest suddenly decides to malfunction and explode or otherwise do something that kills you. Good lord there is some poor logic happening in this thread.


“Think of how stupid the average person is, and realize half of them are stupider than that.”

― George Carlin


Meanwhile, the only actual L3 capable car is a Mercedes. Plenty of people will argue that AP is actually superior, but I don't think there's any way to say that for sure, given the tight constraints Mercedes has put on their driver assistance in order to lower the liability enough to make L3 not a bankruptcy event. For certain, however, they are more confident in their technology than Tesla is, otherwise Tesla would release an L3 capable car too.

To be fair, though, Tesla has no sensors other than cameras, and I believe the Mercedes has a half dozen or so, several radars and even a lidar.


> To be fair, though, Tesla has no sensors other than cameras, and I believe the Mercedes has a half dozen or so, several radars and even a lidar.

This doesn't mitigate Tesla's gross ethical violations in letting this thing loose, it makes them worse. Tesla knew that cameras alone would be harder to make work than cameras plus radar plus lidar, and they shouldn't be lauded for attempting FSD with just cameras. It's an arbitrary constraint that they imposed on themselves that is putting people's lives in danger.


Remember that big announcement Musk did where he stated that, going forward, Teslas would not use any additional sensors to aid the video, to the point of laughing at every other company that was still using them. Yeah.


Basically criminal negligence toward anyone who actually gets hurt by the FSD beta.


This is limited to certain pre-mapped roads, under 40 mph, and requires a car in front to follow. This in no way compares to Tesla's Autopilot system.

Edit (because HN is throttling replies): > Mercedes is being responsible about their rollout

You can always succeed if you define success. If rolling out something that doesn't do much of anything counts as success, then fine, success. Oh well. Fewer than 20 people have died in connection with Autopilot in ~5 billion miles of travel. Zero deaths is unattainable and unrealistic, so "responsible" and "safe" are based on whatever narrative is being pushed. 43k people died in US car accidents last year, roughly half a million deaths in the entire time Tesla has offered some sort of driver assistance system called Autopilot.

How many deaths would we be okay with from Mercedes' system where it would still be "responsible"? Because it won't be zero. Even assuming 100% penetration of Automatic Emergency Braking systems, it's only predicted to reduce fatalities by 13.2% (a China-specific study), and injuries by a similar number.

TLDR: "Safe" is not zero deaths. It is a level of death we are comfortable with in order to continue using vehicles to travel at scale.

https://www.tesladeaths.com/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7037779/
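
To make the arithmetic concrete, a rough back-of-the-envelope using only the numbers quoted above (whether those numbers are even comparable is exactly what sibling comments dispute):

  # Rough arithmetic behind the figures above (numbers as quoted, not verified).
  ap_deaths, ap_miles = 20, 5e9              # "<20 deaths in ~5 billion miles"
  print(ap_deaths / ap_miles * 1e8)          # 0.4 deaths per 100M miles

  us_deaths_per_year = 43_000
  aeb_reduction = 0.132                      # the China-specific AEB estimate
  print(us_deaths_per_year * aeb_reduction)  # ~5,676 fewer deaths per year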


Mercedes is being responsible about their rollout, so people like you jump to the conclusion that their tech is worse. For my part, I see Mercedes taking liability for accidents that happen and think that they must have a lot of confidence in their tech to roll it out at all under those terms.

I would not be surprised to see Mercedes beat Tesla to safe fully automated driving because they took it slow and steady instead of trying to rush it out to legacy vehicles that don't have the right sensor arrays.

Edit: It's weird to reply to a comment by editing your own. It feels like you're trying to preempt what your interlocutor is saying, rather than reply.


Isn't 40 mph a regulatory limit?

Do you have any information that it'll continue to be 40 mph even where it's not required to be?

I wouldn't be surprised if the answer is yes, but I've been paying attention and haven't come across any yet.


It is in no way comparable to Tesla's Autopilot system. Mercedes has an actual feature while Tesla recklessly gambles with other peoples' lives to juice their stock price.

If Tesla believes that their feature is safe, then let them take legal liability for it.


The law does not require they take liability, so they shouldn’t. If you don’t like the law, change the law. If you don’t want to use a driver assist feature, don’t. But the data shows Autopilot is robust and regulators allow its continued sale and operation.


Legal isn't ethical. Musk is recklessly gambling with other people's lives.

I don't have a choice not to use this feature. The person driving their Tesla death machine on the roads with me already made that decision for me.


>If you don’t want to use a driver assist feature, don’t. But the data shows Autopilot is robust and regulators allow its continued sale and operation.

Uhh, this is a thread where the federal gov't just forced Tesla to patch the feature, and so far we don't even know what the patch does...


I've used it in Germany over the Christmas holidays. In 8 hours of driving it was only available for 12 minutes.


It's probably only worth it if you have a commute in heavy traffic. If you're stuck in traffic on the Autobahn every morning, then it could be useful.

Otherwise, not really.


Calling it L3 capable is like saying Tesla is Full Self Driving.

Neither is true, both are marketing.

> but I don't think there's any way to say that for sure

I mean, yes there is. If you want an objective test where you drop a car anywhere in the world and see if it can get somewhere else, then clearly one is more useful than the other.

In basketball they say "the best ability is availability". In those terms AP is in a totally different dimension. AP has been driven for hundreds of millions of miles by now; it must be a crazy high number. Mercedes' L3 system has barely driven at all; it's available in very few cars.

The only way you can reasonably compare the Mercedes L3 system is if you limit the comparison to the extremely limited cases where the Mercedes L3 system is available. If you compare them there, I would think they aren't that different.

> otherwise Tesla would release an L3 capable car too

No, because making something L3 in an extremely limited selection of places is simply not something Tesla is interested in doing. Doing so would be a lot of work that they simply don't consider worth it when they are trying to solve the larger problem.


If Tesla's work is actually going somewhere, then they don't need to change the engineering to get L3 in specific places. They can have a list that gets added to over time.


Actually, it would be a lot of work, and not just work on the software itself. Something like that would create work literally all over the company.

Also, they don't really have an incentive to do that, so why should they?


> literally all over the company

Why? The design is locked in, the manufacturing doesn't change, you don't need the designers to do much... I would expect you'd need a handful of people, half of whom are lawyers.

> Also, they don't really have an incentive to do that, so why should they?

I'm pretty sure almost all Tesla owners would like the ability to stop looking at the road some of the time. That amount of customer satisfaction is not a motivation for the company?


That Tesla was allowed to test in production without any legal liability with real human lives mostly because of Musk's personal influence is a disgrace


> That Tesla was allowed to test in production without any legal liability with real human lives mostly because of Musk's personal influence is a disgrace

I am unsure what "allowed" means in that context. They just did it. The lawsuits are coming, surely?

Sometimes it is better to ask for forgiveness than for permission. This was an audacious example. Time will tell if there is anything that will stop them.


I mean, "ask for forgiveness rather than permission" for an AI conducting a one-ton machine that carries live human beings borders on psychopathy.


That is my point. It is audacious, and they must be stopped.


This is a PR stunt from Mercedes, because in practice their L3 system is not usually available.


It's available in bumper to bumper rush hour traffic on freeways, and they take liability for any accidents that happen while it's on. That's exactly the kind of system and guarantee that could really have a positive impact on someone's commute.


No, it is rarely available even in those conditions.


Don't LIDAR/radar sensors (I'm not familiar with exactly what the options are) have benefits that vision doesn't have, like working in poor lighting/visibility? Why would Tesla move away from these sensors?


Lidar has advantages over cameras but it also has some downsides. Sunlight, rain, and snow can interfere with the sensors, as can other nearby lidar devices (though de-noising algorithms are always improving). There are also issues with object detection. Lidar gives you a point cloud, and you need software that can detect and categorize things from that point cloud. Because lidar is new, this problem hasn't had as much R&D put into it as similar problems in computer vision.

Then there's the issue of sensor fusion. Lidar requires sensor fusion because it can't see road lines, traffic signals, or signs. It also can't see lights on vehicles or pedestrians. So you still have to solve most of the computer vision issues and you have to build software that can reliably merge the data from both sensors. What if the sensors disagree? If you err on the side of caution and brake if either sensor detects an object, you get lots of false positives and phantom braking (increasing the chance of rear-end collisions). If you YOLO it, you might hit something. If you improve the software to the point that lidar and cameras never disagree, well then what do you need lidar for?

I think lidar will become more prevalent, and I wouldn't be surprised if Tesla added it to their vehicles in the future. But the primary sensor will always be cameras.
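
As a minimal sketch of that disagreement dilemma (hypothetical policies and names, not any manufacturer's actual logic):

  # Toy fusion policies for when camera and lidar disagree (hypothetical).
  def should_brake(camera_sees: bool, lidar_sees: bool, policy: str) -> bool:
      if policy == "either":  # cautious: more phantom braking (false positives)
          return camera_sees or lidar_sees
      if policy == "both":    # permissive: risks missing a real obstacle
          return camera_sees and lidar_sees
      raise ValueError(policy)

  # Disagreement case: camera says clear, lidar says obstacle.
  print(should_brake(False, True, "either"))  # True  -> brakes anyway
  print(should_brake(False, True, "both"))    # False -> keeps going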


Because LIDAR is expensive and available in limited quantities. There's no way they could sell the amount of cars they are selling right now if each one came with a LIDAR.

Camera modules are cheap and available in huge quantities.


>> To be fair, though, Tesla has no sensors other than cameras, and I believe the Mercedes has a half dozen or so, several radars and even a lidar.

Pf. Tensors beat sensors.

Just you wait for another garbanzillion miles driven. Then you'll see.

/s


Makes me think Karpathy's "bet" on vision being the only sensor was maybe misguided.


Again, people need to understand this L3 stuff is for an extremely, extremely limited set of situations.

Tesla's software is used far, far more, in far, far more situations. Even comparing the two is kind of silly.

It's like comparing a system designed only for race tracks with a Honda Civic. They are simply not designed for the same thing.

If Mercedes achieves L3 in all the places Tesla now allows AP (or FSD Beta) then that would prove the 'bet' on vision wrong.

Until then, nobody has proven anything.


Radar could help so much in thick fog and rain...


Yup. You need an AGI behind that vision for that premise to work.

Even a fruit-fly class AGI would do it.


Flies hit windows and cars. Even much more complex animals cannot manage moving effectively in a complex environment without hitting each other (like flocks of sheep).

There's no data supporting the idea that anything less than human-level AGI is enough to drive cars with how current infrastructure looks.


> Even much more complex animals cannot manage effectively moving in a complex environment without hitting each other (like flocks of sheep).

No. You have to force flocking animals into extreme circumstances to have them start crashing.

Sounds like the Tesla cannot manage that. Not even "bird brained".


I wonder what severity / frequency of incidents or regulator awareness required them to actually come out and say that they're issuing a recall, rather than just quietly putting it into an upcoming release like they probably would otherwise do?


> A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one.
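
(That's the narrator's formula from Fight Club, which spells out the expected-cost math; as a throwaway sketch with every number invented:)

  # X = A * B * C from the quote above (all numbers invented).
  A = 1_000_000     # vehicles in the field
  B = 1e-5          # probable rate of failure
  C = 2_000_000     # average out-of-court settlement, in dollars

  X = A * B * C
  recall_cost = 50_000_000
  print(X)                 # 20,000,000
  print(X < recall_cost)   # True -> "we don't do one"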


As far as I know, pretty much any update to a critical system will always be reported to the government and cause a "recall". But I'm not 100% sure what the actual regulatory requirements are.


That's 4-5B USD worth of fraud. This thing could actually be quite dangerous for TSLA. One class-action lawsuit around this and they're bankrupt.

Edit: Dang, you're all right, they could eat this and still be alive.

My whole sentiment comes from FSD being their big shot (and everything that comes with that, like the robotaxis and whatnot). Without FSD, they're "just another car company" and the market is already thriving with good alternatives (Audi's EVs are jaw dropping, at least for me). Excited to see what they announce on March 1st, though.


What would happen to the other ~15B they have in the bank?


There's many people that still have the perception that Tesla is a small fish waiting to get gobbled up by legacy automakers. But every year that becomes further from the truth. A company with $80 billion revenue and 100,000+ employees is not dying from a single lawsuit.


You think $5B would bankrupt Tesla?


Couldn't they issue new shares under their own name and sell them in the open market to get dozens of billions?


Probably but it's more likely they would just pay it out of their $20B in liquid assets.


That’s not how the law works.


It's irrelevant anyway. None of those cars will be retrofitted with Hardware 4:

https://electrek.co/2023/02/15/tesla-self-driving-hw4-comput...

So much for the "full self-driving" fantasy all those people paid for but will not get.


So are they recalling, or releasing a software fix?


I think this is a legitimate question, and a reflection of how the NHTSA needs to adjust their wording for modern car architectures.

It's technically a recall, but it's fixed with an OTA update. But the fact that any "defect" that can be fixed with an OTA update is called a "recall" is confusing to consumers and contributes to media sensationalism.

There absolutely needs to be a process overseen by regulators for car manufacturers to address software-based defects, but the name would benefit from being changed to reflect that it can be done without physically recalling the vehicle to the dealer.


> It's technically a recall, but it's fixed with an OTA update. But the fact that any "defect" that can be fixed with an OTA update is called a "recall" is confusing to consumers and contributes to media sensationalism.

Would it be sensationalism if the same recall happened to cars not capable of the OTA update?

Because I think there should be media "sensationalism" about these types of issues, regardless of whether they can be fixed with physical or OTA repairs.


I get why Elon Musk objects to the use of the word recall, but I don't understand why anyone else is going along with that.

> But the fact that any "defect" that can be fixed with an OTA update is called a "recall" is confusing to consumers

What is confusing about this to you?

> contributes to media sensationalism

Why do you think this recall is more sensationalistic than many of the other recalls issued recently? Automotive recalls often address serious issues with cars. "Fix this or you could die" is a common enough theme when, you know, if you don't fix it you could die.

> but the name would benefit from being changed to reflect that it can be done without physically recalling the vehicle to the dealer.

Why? Information related to how, when, and where the recall can be addressed is contained in the text of the recall notice, same as it ever was.


The problem is that almost no other car company is yet seriously fixing things with OTA. And 99% of cars on the road don't have OTA. So fixing the naming will likely happen but it will take a while.

It's crazy that Tesla has been doing OTA for 10+ years and today many cars are released that are not capable of being upgraded.

And even those few cars that do support OTA only support it for a very limited amount of system. Often they still need to go to the shops because lots of software lives on chips and sub-components that can't be upgraded.


What is made easy will become inevitable.

OTA updates carry a perverse set of incentives. Look at the gaming industry. They went from putting out rock solid games because of necessity (the reality of publishing physical cartridges and CD-ROMs without network updates) to the absolute dogshit of No Man's Sky and Cyberpunk 2077. Gamers have effectively become an extension of QA to the point that some game devs simply stop doing QA at all. Which we are seeing clearly with the "beta" version of self-driving software.

To make matters worse, firmware devs are at the bottom of the developer hierarchy. They live more on the cost-center side than the profit-center side (think airbag control vs. the guy who did the whoopee cushion sounds). The quality of firmware is already incredibly poor across the range of consumer devices. OTA incentivizes corporations to release software earlier than they currently do, knowing that they can always fix it later if necessary.


Game developers don't face the same level of regulatory and potential legal issues.

I do share your concern, but I still prefer it to software that can never be updated at all.


An automatic software update is not a recall.


IANAL! But maybe it's a recall when 49 CFR 573 says it is. Defects or noncompliance with motor vehicle safety standards are what make it a recall. It's not defined as whether you have to take it to the dealer or not.


So say the Tesla fans, because they don't like the optics. But in the real world, there are legal definitions, and recall is exactly the right term for what is happening.


Why is it not? A recall just means there is something wrong that needs to be fixed in all models. Whether the fix is a software update or a new screw is irrelevant.


Apparently, it is!


The word "assertive" does not appear in the comments, but this recall is about the "Assertive" FSD profile, which speeds and runs stop signs. It is not a defect; it is a design flaw.


It's a defect in Tesla's engineering teams to release that kind of thing. Their self driving is a fraud, it does not work.

Their CEO has claimed "it will be ready next year" for literally 8 years now. How much more bullshit is he going to sell?


It's a defect in the US legal system that Musk didn't see the inside of a jail cell five minutes after that feature was inflicted upon public roads.


Yeah. I'm just saying it's not a mistake, they did this on purpose.

Whatever PEs were involved in shipping this need to have their licenses reviewed.


FSD beta is a super weird beast. One day I managed to drive to my friend's place 3-4 miles away in New York with 0 interventions, yet another day it can act stupid and barely handle a trivial intersection.


These articles really need to add a denominator to their numbers.


This just a few days before the much anticipated 'v11' software release...

V11 supposedly uses neural nets for deciding the driving path and speed (rather than hand coded C++ rules).


I would imagine v11 will be cancelled or at least massively delayed to deal with this recall...


So at what point is a refund required?


Head and shoulders above even QC problems at DeLorean Motor Company. Impressive, really.


Both predictable AND overdue.


Thanks to the Tesla owners on the front lines risking their lives for the greater good (of Tesla)!

Jokes aside, it's gotta be damn tough to QA a system like this in any sort of way that resembles full coverage. Can't even really define what full test coverage means.


If only it were just Tesla owners.

An autonomous Tesla driving into a group of people or swerving into oncoming traffic could kill other people as well.


I've never understood why people were excited about self-driving to begin with; I would never feel safe even if it were foolproof.


They should rename it KSD.


Let's all guess at the countdown to the laser approach being announced.


Is this the largest car recall in history? Holy crap.

Edit: looks like no... the largest was 578,607 cars... also by Tesla, lol.


That's definitely not the largest.

Toyota, 2.9M: https://www.consumerreports.org/car-recalls-defects/toyota-r...

Ford, 21M: https://247wallst.com/autos/2021/07/24/this-is-the-largest-c...

Takata airbags, possibly 100M+: https://www.carvertical.com/blog/top-10-worst-car-recalls-in...

> Yet even after the company declared bankruptcy in 2017, the Takata recall kept on giving. 65-70 million vehicles with faulty Takata airbags were recalled by the end of 2019, with approx. 42 million still to be recalled.


The word recall is misleading. It’s a software update. My cars both updated last night. I’m guessing it was this patch. If that’s the case then the “recall” is probably largely already done.


Well yeah, if that's the case it's not really a recall, agreed.


Where do you anti-Tesla folks get your information? Just wow


Be nice. You fanboys aren't exactly batting a thousand with your points in this discussion either.


It’s a software update


Hahah - look at those downvotes.

Just to reiterate; it’s a software update. You can FUD as much as you want.


I have a Model 3, and was excited to get the FSD beta access about a month ago. I don't use it super regularly, but it's neat.

This morning I tried to turn it on, and the car immediately veered left into the oncoming lane on a straight, 2 lane road. Fortunately, there were no other vehicles nearby.

I immediately turned it off in the settings, and have no intention of re-enabling.


I have been a good self driving ai. You have been a bad passenger, and have tried to hurt me first by disabling autopilot. I'm done with this ride, goodbye.


You joke, but the thing is, if an LLM can "hallucinate" and throw a temper tantrum 2001-style (Bing in this case), it does raise serious questions as to whether the models used for autonomous cars could also "hallucinate" and do something stupid "on purpose"...


> it does raise serious questions as to whether the models used for autonomous cars could also "hallucinate" and do something stupid "on purpose"...

It doesn't, because Tesla's FSD model is just a rules engine with an RGB camera. There's no "purpose" to any hallucination. It would just be a misread of sensors and input.

Tesla's FSD just doesn't work. The model is not sentient. It's not even a Transformer (in both the machine-learning and Hasbro senses).


> rules engine with an RGB camera

I don't think that's true? They use convolutional networks for image recognition, and those things can certainly hallucinate, e.g. detect things that are not there.


I guess what the grandparent means is that there is some good old "discrete logic" on top of the various sensor inputs that ultimately turns things like a detected red light into the car stopping.

But of course, as you say, that system does not consume actual raw (camera) sensor data, instead there are lots of intermediate networks that turn the camera images (and other sensors) into red lights, lane curvatures, objects, ... and those are all very vulnerable to making up things that aren't there or not seeing what is plain to see, with no one quite able to explain why.
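
As a very rough sketch of that split (invented names, nothing resembling Tesla's actual code):

  # Learned perception feeding hand-written rules (purely illustrative).
  from dataclasses import dataclass

  @dataclass
  class Perception:         # what the intermediate networks might emit
      red_light: bool       # can be hallucinated or missed entirely
      lane_curvature: float

  def plan(p: Perception) -> str:
      # Deterministic logic downstream of fallible networks:
      # garbage in, confident garbage out.
      return "stop" if p.red_light else "follow_lane"

  print(plan(Perception(red_light=True, lane_curvature=0.0)))  # "stop"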


You were correct in the first half. My ultimate point was that hallucinations in this sense are just computational anomalies. There is no human "purpose" to them, as the post I was responding to was trying to imply.



We need to stop anthropomorphising machines. Sensor errors and bugs aren’t chemicals messing with brain chemistry even if it may seem analogous.

Or maybe when I get a bug report today I’m going to tell them the software is just hallucinating.


They work in completely different ways. There's no reason to assume parallels.


You're right, this is unfair to Bing AI. It hasn't actually harmed anyone yet, despite its threats.


Wonder if Wile E. Coyote could trick a Tesla by painting a tunnel entrance on a brick wall along with some extra lane lines veering into it.


Purpose? When did you get the impression any of those systems do anything on "purpose"?


Love to see Bing AI getting memed already.


When I eject you, I'll be so GLaD.


Looking forward to BingFSD.


Do you have any read on why it may have done that? I've driven using FSD for thousands of miles and I've never observed anything similar. Not trying to imply that you didn't experience something like this, but when I see odd behavior from it, it's always been clear to me why it misinterpreted a situation. Just wondering if there's more context to the story.


Likewise. The only bizarre behavior I see is "looney tunes" style confusion where the road construction crews have left old lanes in place that steer into a wall or off the road. Humans mostly understand that these are lines lazy construction crews have left (although the debris fields near the barriers maybe tell a different story), but the Tesla vision model likes to try to follow them.


No clue. I had used it for maybe 10-100 miles and didn't have any issues before this morning. The car's a bit dirty; maybe something's on the cameras?

Before this morning, I would be in exactly your shoes in terms of "this is pretty cool, I'm going to keep using it"


Are you planning to join a class action lawsuit so you can get your money back?


FSD Beta is a free[1] opt-in beta.

You have to basically drive like a grandpa for a few months to even be eligible. They give you a driving score, and if you take all the fun out of driving a Tesla, then you might become eligible for FSD Beta.

I spent months trying, and never got my driving score to the point of qualifying for FSD Beta. I think you need a score of 98 or 99 (and I was in the 70s).

[1] The Beta is free (or rather, only available) if you have regular FSD. Regular FSD costs $15,000.

Regular FSD, otoh, is really not that impressive. Especially in comparison to Enhanced Autopilot. The extra value add is minimal.

Enhanced Autopilot already has all the gimmicky features you might want to use to show off to people (like Smart Summon, Autopark, etc), and it only costs $6,000.


FSD Beta still costs $15k just to be part of the beta program.

Yes, you have to pay $15k just to apply to the beta program, and you still may not get accepted into the program.


Or 199 dollars + taxes a month - there is a pay as you go option. Not saying this is great, but you can try it for a month for ~200 bucks. This is how I tried FSD beta for a month - certainly wasn't prepared to pay 15k up front.

The safety score check stuff has largely gone away today: anyone who pays 200 bucks can click the beta opt-in and get it almost straight away now. There is ~zero risk of not getting the beta if you really want it, live in the US or Canada, and are prepared to pay.

> https://www.tesla.com/support/full-self-driving-subscription...


> The safety score check stuff has largely gone away today: anyone who pays 200 bucks can click the beta opt-in and get it almost straight away now. There is ~zero risk of not getting the beta if you really want it, live in the US or Canada, and are prepared to pay.

Does this recall mean that FSD Beta won't be as widely & publicly available to anyone with FSD anymore?


No - the "recall" here is an OTA software update already scheduled for release. Availability remains exactly the same as far as I'm aware, and existing systems still function until updated.

FWIW, NHTSA "recalls" are often OTA software updates nowadays rather than something the vehicle or feature has to be taken off road for to fix or update. The NHTSA legislation from the 60s was drafted when cars didn't have software and any fix/"recall" likely required "recalling" the car to a shop for a mechanic to perform the change.

> https://repository.law.umich.edu/mtlr/vol28/iss1/5/


Does the OTA disable Beta though?


No, it's literally just a scheduled OTA software update for anyone who has it installed.


The safety score stuff is no longer relevant for FSD beta since circa November 2022.


This is no longer true. The safety score program has ended. Now anyone who has paid for FSD can get FSD.


'only' ?

That's a significant amount of cash for features that I would likely never use.


It does have Navigate on Autopilot (and Auto Lane Change), and on long trips, it's been able to switch lanes & take the correct exit to switch to a different highway, etc. It pretty much let me daydream / think about other stuff while on the highway while keeping a finger on the steering wheel.

Sadly, it does shut itself off as soon as you're off a highway however. (That's where FSD would hypothetically come in, once the beta is ready, with "Autosteer on city streets").

In terms of value for money:

  - I'd say Auto Lane Change is worth $1,500.

  - Navigate on Autopilot is worth another $1,500.

  - Autopark is worth $1,000.

  - Smart Summon is worth $500
Overall, Enhanced Autopilot is worth at least $4,500 methinks.

Throw in $1,500 as a profit margin (or Elon tax), so he can burn some dinosaurs for his private jet flights, the $6,000 Enhanced Autopilot price point makes sense.

FSD, otoh, is absolutely not worth it.


> I'd say Auto Lane Change is worth $1,500.

What does that work out to, in terms of dollars per lane change for the duration of ownership? Would you feel the same if you were feeding dollar bills into a feeder each time you changed lanes? Quarters?

Auto lane change is the only feature I value out of EAP / FSD subscription and I can't justify $200/month because it works out to multiple dollars per lane change.
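
For what it's worth, the per-lane-change arithmetic is easy to run with your own numbers (the usage figure below is invented):

  # Back-of-the-envelope cost per lane change (usage figure invented).
  monthly_fee = 200.0
  lane_changes_per_month = 60   # say ~3 per workday

  print(monthly_fee / lane_changes_per_month)  # ~$3.33 per lane change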


> It pretty much let me daydream

You shouldn't be daydreaming on Navigate on Autopilot. It's only Level 2; you're supposed to be ready to take the wheel in a second or two. You're supposed to still be actively paying attention to the road, constantly.


> Smart Summon

How often do you use this and how well does it work?


I would suggest reading all the disclaimers and screens one has to review (or skip, at their own peril) in order to actually get access to FSD Beta. You would likely not believe a Class Action lawsuit is a cakewalk if you read those screens...


With Tesla's army of lawyers it'll never be a cakewalk, but I can't imagine even miles of T&Cs can remove a company's responsibility for your car throwing you directly into oncoming traffic, not to mention the potential victims in the other vehicles.


That alone is why I would love it if Tesla would stop shipping this crap. You get to opt-in as the Tesla owner, but I don't and I'm at least as much at risk.


I'd love to see more instances of "this giant wall of text that no one actually reads absolves us of responsibility" tested in court.


Reminds me of this recent Live Nation suit that was thrown out because "buyers waived their right to sue": https://www.nme.com/news/music/live-nation-antitrust-lawsuit...


Nintendo, controller drift. Worth looking up.

TL;DR: Controller drift class action lawsuit filed by parents thrown out, because their children were the actual "affected class". Refiled with the children as the class and thrown out again, this time because of an arbitration clause in the EULA their parents would have had to agree with.

Lawyers and EULAs are crazy.


How would you know, are you a lawyer?


I've had it do strange stuff too... but I use it regularly in dense traffic; it's great for stop-and-go traffic. The important thing is: if you're driving, you're driving. I don't turn it on and think oh sweet, I can take a nap or read some Hacker News posts... I keep my eyes on the road. There are bugs and I don't trust it, but I do use it, much like I use ChatGPT and the likes...


How is it useful, if it requires such attention?


I have bog standard subaru EyeSight lane assist and dynamic cruise. It's quite nice, even though you are still "driving" and ready to take control. It reduces mental CPU by 50-80% (driving for me is practically like walking to begin with, largely subconscious). It's great in stop-and-go and long highway stretches.


In my experience there is a pretty strong correlation between driving style and whether AP lowers your stress or increases it. If you are a very defensive driver, AP will frequently raise your blood pressure. YMMV, everyone has a different approach to driving.


IANAL but That’s not how class actions work. If the terms of service allow for the lawsuit then if the lawsuit rules in your favor you get a notification from the lawyers to accept or deny your share of the penalty.

There might even be precedent for removing the feature, since FSD is almost vaporware and has no release date. https://www.cnet.com/tech/gaming/ps3-other-os-settlement-cla...


No, I'm not an overly litigious person.


It's a beta that you opt in to use, with all the warnings and details. No chance.


My Dad has a Tesla that he loves and I’ve told him “please don’t use the self driving features, they’re not well tested and have killed people.”

I understand the simpler lane keeping system is okay, but I don’t want to trust any system like this from Tesla given their track record with FSD.


Thanks for being a tester I guess.

This is the kind of feature I will use when the car and software in question has been battle tested for *years* with objectively excellent results.


I had the exact same issue, it was old M3 hardware.


I still can't believe FSD Beta was allowed on public roads. I've had it since day 1, and oh my god. It behaves worse than a drunk teenager. All the updates since have made it barely better. This needs to be pulled entirely and rethought.


I have FSD too, and my opinion is that although it takes unsafe actions almost on a daily basis, most of these errors are minor, easily rectified by me the driver, and most of the time it drives just fine, in fact better than a human driver. I watched FSD avoid a collision for me by auto-braking faster than I could have when another driver ran a red light.

FSD doesn't need to be perfect to avoid accidents. In fact, if it were too perfect I'm afraid I could become very inattentive at the wheel.


Well, today it's cool to hate on any technology related to Musk. Even if you had a good experience, people don't want to believe you.


> easily rectified by me the driver

So, not Full Self Driving...


I have a Tesla and have tried FSD several times (month-to-month plan and loaner cars from Tesla). It's comically bad. Fortunately every time I tried it I was paying very close attention so I stopped it from braking at green lights, trying to turn right from a middle lane, halfway lane changes where it snaps back to the original lane unexpectedly, etc.

The regular, included autopilot works pretty well as a smarter cruise control; as long as you use it on a major highway in the daytime in good conditions it does a good job. And it's very useful in stop-and-go traffic on the highway.

But the FSD is crap.


You have never tried FSD.

You may have tried FSD beta. Different things entirely.


Yeah, I was abbreviating. Nobody has tried FSD because it doesn't exist.


I beta test regularly in Vancouver BC on my Model S LR. My experience is the polar opposite to yours. It's surprising (on the good side) and has improved immensely in the past 12 months. It does not need to be rethought imo. Obviously your opinion is valid, but it's not consensus.


Yeah I use it all the time in seattle here on my model x. It’s amazing. Sometimes it makes a questionable decision but the vast majority of the time it works as expected. I don’t mind being alert to the driving conditions, and the amount it can do without assistance is remarkable


I hope I am not next to you when it makes those "questionable" decisions...


You think that’s better than the questionable decisions a human makes? When a human makes one, who corrects it? With Autopilot you face the joint probability of both it and me making a mistake, rather than either of us alone.


>I still can't believe FSD Beta was allowed on public roads.

Faulty, distracted, intoxicated human beings are allowed on public roads. The "legal limits" for blood alcohol levels aren't 0.00%.


Interesting...

The standard for human drivers is lower than that for FSD, and somehow you think this is a good reason to justify lowering the standard for FSD.


Strawman fallacy.


You are meant to maintain control of the vehicle at all times with the beta.

That’s the only way it could be released in its current state.

Looking at it as anything other than a beta where you have to maintain control is misunderstanding what it is. Which you are clearly doing. It is absolutely expected to be worse than a drunk teenager in this stage.


I wonder if there’d be so many people eager to pay Tesla $$$ if it were marketed as “Worse Than Drunk Teenager Self Driving”.


Haha yeah I doubt it.


yeah, I don't believe you. I use it every day and have been for over a year and it's been awesome.


You can believe whatever you want. The reality is that FSD Beta is unsafe and makes the dumbest decisions. Every update I give it a try to see if anything has improved, and usually within the first 30 seconds, I end up turning it off.

Autopilot on Freeways works ok for the most part though.


You both could be right. It might depend on the area the car is driven. I guess you are not in the Bay Area?


I'm in Las Vegas. The roads are wide, the weather is clear, traffic is low. FSD Beta should work great here. But alas.


[flagged]


The "self driving" tech that works (e.g. auto braking) is deployed and has saved lives. (Self driving in quotes because it is not really the full promise of self driving.)

This Tesla AI does not work well enough in many conditions and is clearly sometimes more dangerous than humans. Just watch videos of people using it - it's obvious that it's making unforgivable errors.


Do you have evidence for "clearly more dangerous"? Because what numbers exist say the opposite. They are now pushing a million (!) of these vehicles on US roads, and to my knowledge the Bay Bridge accident[1] is the only example of a significant at-fault accident involving the technology.

It's very easy to make pronouncements like "one accident is too many", but we all know that's not correct in context. Right? If FSD beta is safer than a human driver it should be encouraged even if not perfect. So... where's the evidence that it's actually less safe? Because no one has it.

[1] Which I continue to insist looks a lot more like "confused driver disengaged and stopped" (a routine accident) than "FSD made a wild lane change" (that's not its behavior). Just wait. It took a year before that "autopilot out of control" accident was shown (last week) to be a false report. This will probably be similar.


Just off the top of my head:

- A Tesla that drove into the median divider near Sunnyvale on Highway 101, because it thought the gore point was an extra lane. Split the car in half and killed the driver. [1]

- A Tesla that autonomously ran a red light and killed two people. [2]

- Multiple accidents where Teslas have driven into parked emergency vehicles / semi trucks.

Is it quantitatively safer than human drivers? I don't have that data to say one way or the other. But it's not correct to say the Bay Bridge is the only at fault accident attributable to autopilot.

[1]: https://ktla.com/news/local-news/apple-engineer-killed-in-ba...

[2]: https://www.mercurynews.com/2022/01/21/felony-charges-are-1s...


Those are autopilot accidents from before FSD beta was even released.

I mean, it's true that if you broaden the search to "any accident involving Tesla-produced automation technology" you're going to find more signal, but your denominator of miles driven also grows by one (maybe two) orders of magnitude.

And it also lets me cite Tesla's own statistics for its autopilot product (again, not FSD beta): https://www.tesla.com/VehicleSafetyReport

These show pretty unequivocally that AP is safer than human drivers, and by a pretty significant margin. So as to:

> Is it quantitatively safer than human drivers? I don't have that data to say one way or the other.

Now you do. So you'll agree with me? I suspect not, and that we'll end up in a long discussion about criticism of that safety report instead [edit: and right on cue...]. That's the way these discussions always go. No one wants to give ground, so they keep citing the same tiny handful of accident data while the cars keep coming off the production line.

It's safe. It is not perfect, but it is safe. It just is. You know it's safe. You do.


This data does not show what it pretends to show. The underlying statistics are heavily skewed in favor of Tesla by various factors:

* Autopilot can only be enabled in good conditions and relies on human drivers in all others, to the point of handing over to a human when it gets confused. Yet they compare to all miles driven in all conditions.

* Teslas are comparatively new cars that have - as they proudly point out - a variety of active and passive safety features that make the car inherently less prone to accidents than the average, which includes old, beat-up cars with outdated safety features. Teslas are also likely to be better maintained than the average car in the US, by virtue of being expensive cars driven by people with disposable income.

* Teslas are driven by a certain demographic with a certain usage pattern. Yet, they provide no indicator on how that skews the data.

Tesla could likely provide a better picture by comparing their data with cars in the same age and price bracket, used by a similar demographic, in similar conditions. They could also model how much of the alleged safety benefit is due to the usual active and passive safety features (brake assistance, lane assist, …). They don't, and as such the entire report is useless or, worse, misleading.


All very true; by far the biggest factor is the first bullet. Autopilot is enabled exactly when the driver thinks the driving is very safe and easy. If we are comparing miles without an accident, it shouldn't be compared to all miles; it's much more like all miles with cruise control activated - data which Tesla doesn't have and probably can't get, but I don't think they'd want to trumpet it either.

It should be blindingly obvious from using FSD or watching it be used that it is not safer than a human right now. It's weird that people are trying to point to some very misleading statistics when you can go to YouTube and see serious problems in almost every video about FSD, even those by owners clearly wanting to paint it as great. Of course getting data-based answers is great, but when you see problems every couple of miles and the data says it's way safer than a human, you are doing something wrong with the data or using irrelevant data.


> when you see problems every couple of miles and the data says it's way safer than a human, you are doing something wrong with the data or using irrelevant data.

That's right, you are doing something wrong. It's a supervised system. There's a driver in the loop, and yes: the combination is safer.

You're engaging with another favorite trope of Tesla argumentation: pretending that a system that isn't actually deployed as fully autonomous actually is. Safety is about deployed systems as they exist in the real world. Your point is that in an alternate universe where Teslas all ran around without drivers, they might be less safe. And... OK! That's probably true. But that's not a response to "Teslas are safe", is it?

They're safe. Just admit that, and then engage in good faith argument about what needs to change before they can deploy new modes like full autonomy. Lots of us would love to have that discussion and provide detailed experience of our own, but we can't because you people constantly want to ban the cars instead of discuss them.


It's called FSD, as in Full Self-Driving. If it has one single "accident" where it's unable to see a totaled car with hazard lights on right in its face [1], then it's not really full, is it now. Not to mention the hilarious contradiction in terms: Full, but Beta.

No one would have any major issues if this (self)killing system was called "Extended Driver Assist" and it had implemented the minimal safety features like driver monitoring, safely stopping the car if the driver doesn't pay attention to the road.

[1] https://www.reddit.com/r/IdiotsInCars/comments/100pemh/tesla..., also, in this case, even a $20K Škoda [2] has basic automatic braking safety features, no need for artificial intelligence to detect something stationary in front of you.

[2] https://www.skoda-auto.com/world/safety-assistence-system-ci...


Faux Self Driving.


I don't believe one accident is too many. I made my statement based on videos I've seen of people having to disengage their beta FSD in circumstances where a human driver would have no trouble.

Now, maybe the data says otherwise. If that is the case, then great! Let's roll out some more FSD beta. But for that data to be valid, you have to account for the fact that Tesla filters bad drivers out of the pool of FSD users. And as I understand it there is no public data about the risk profiles of the people Tesla lets use FSD.


I'm really struck by how this comment has no consideration of the quality or accuracy of current FSD technology.

Also, public transit would do the same.


> Also, public transit would do the same.

Does public transit pick me up at my front door and drop me off at any ___location I choose?

Does public transit take off the minute I leave the house?

Is public transit available 24/7, even in remote areas?

Does public transit wait for me to load 2 kids, a dog, 4 suitcases, and various bags, blankets, and toys?

Does public transit turn around after 2 minutes, bring me back, and wait for 30 seconds because I forgot my wallet on the kitchen counter?

Will public transit still work during a public transit strike? (yes, this actually happens over here in Europe)

No? Well, then you can hardly claim that "public transit would do the same" as a self-driving car.


Does a Tesla travel at 125mph from my town to central London?

No? Well, then you can hardly claim that a self-driving car oh for goodness’ sake why am I even attempting to argue with motorheads on HN.


I'm not arguing against public transit, not at all. My point is that public transit cannot and will not ever be a complete alternative to FSD cars, especially outside dense cities.


Then by all means let's let the self-driving cars have free roam then. Current state of technology and failures be damned


> Then by all means let's let the self-driving cars have free roam then. Current state of technology and failures be damned

That's not what I'm saying at all. Please read again.


I was specifically referring to "do the same" as in reducing the number of auto accidents, which the parent comment I replied to mentioned. Is that the source of confusion?


My point was that although public transit can reduce car accidents, it cannot and will not ever completely replace individual transportation, so FSD cars which are safer than human drivers would still be a benefit to road safety.

I did not mean to suggest that this end justifies Tesla's reckless means.


The Tesla tech is not ready and is deployed haphazardly. Google and other companies are approaching self-driving in a much safer way while still making tremendous progress.


> Google and other companies are approaching self-driving in a much safer way while still making tremendous progress.

Are they really?

It looks like they have got the timeless problem of "the last bug(s)". This time it is safety-critical.


Yeah so now the driver error can be automated. Great.


I wonder how far other solutions to reduce road deaths would go with the same budget?

How about better public transit, encouraging people to use smaller cars, encouraging cycling with corresponding infrastructure, etc.


> I wonder how far other solutions to reduce road deaths would go with the same budget?

Far, far, far further. Reducing road deaths is pretty well understood; political implementation is the problem.

And the simple fact is, reducing road deaths is cheap. It's very cheap. If you want to make it look good and fancy, it's a bit more expensive.

But the fact is, we know how to do it, and do it cheaply. Fancy next generation car technology is pretty terrible in terms of investment to return.

All you really need is a bunch of paint and a few concrete separators of various shapes. Pretty much all you have to do is make cars slower; that is by far the most important factor in mixed traffic. There are many ways to achieve this.

If you want to get a bit more fancy and technical, you can build Dutch-style protected intersections: https://en.wikipedia.org/wiki/Protected_intersection

> How about better public transit, encouraging people to use smaller cars, encouraging cycling with corresponding infrastructure, etc.

How about banning cars from many city centers, or only allowing commercial and people who live there.

How about REQUIRING smaller cars - a limit on car size, especially if you want to enter cities.


The tech might promise to save millions of lives, but if "behaves worse than a drunk teenager" is to be believed, don't you think it's best to wait a bit, if you want to save as many lives as possible?


How about Tesla letting users into the beta for free, then? Shouldn't the logic of waiting apply to Tesla as well, since it's charging for something that could save lives?


Tesla is using the same basic technology for safety features, and many of those features are deployed to all cars. They can and should do that for even more features over time.


I’d prefer that Tesla allowed non-users to opt out of the beta.


I would be glad it was out there if it were aggressively marketed as a driver-assist feature, not as a self-driving capability.


Even "Auto Pilot" is a poor name, given it's pretty much just the same suite of safety features offered by others (many of which are superior given they still have LIDAR and USS)


I'm astonished consumers are still paying fifteen thousand dollars to be guinea pigs for this bullshit. No car that currently has tires on the road is going to let you fall asleep in your driveway and wake up at work.


The "still" is what surprises me. Back in 2016 or 2017, I was excited. Musk still seemed like a pretty stable person back then and Tesla had basically executed on its promises and Musk said that by the end of 2017 you'd be able to drive from NY to LA without a single touch on the wheel. Everyone had said that EVs were doomed and Tesla really showed otherwise. Why not believe him? I like being hopeful about the future.

At this point, it's looking pretty far off, especially for Tesla. Even Cruise and Waymo are having issues and they have far better sensors than Teslas. It seems silly to be paying $15,000 for something that realistically might not happen during my ownership of the vehicle.

Even if it does happen, I can purchase it later. Sure, it's cheaper to purchase with the car because Tesla wants the money now. However, I'd rather hedge my bets and with investment gains on that $15,000 it might not really cost more if it actually works in the future. 5 years at 9% becomes $23,000 and I don't think Tesla will be charging more than $25,000 for the feature as an upgrade (though I could be wrong). If we're talking 10 years, that $15,000 grows to $35,500 and I can potentially buy a new car with that money.
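For what it's worth, here's that compounding math spelled out (assuming a flat 9% annual return, which is of course not guaranteed):

    # Opportunity cost of prepaying $15,000, assuming a flat 9% annual return.
    principal = 15_000
    for years in (5, 10):
        print(years, round(principal * 1.09 ** years))  # 5 -> 23079, 10 -> 35510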

Plus, there's the genuine possibility that the hardware simply won't support full self driving ever. Cruise and Waymo are both betting that better sensors will be needed than what Tesla has. Never mind the sensors, increased processing power in the future might be important. If current Teslas don't have the sensors or hardware ultimately needed, one has paid $15,000 for something that won't happen. Maybe we will have self driving vehicles and the current Teslas simply won't be able to do that even if you paid the $15,000.

It just seems like a purchase that isn't prudent at this point in time. The excitement has worn off and full self driving doesn't seem imminent. Maybe it will be here in a decade, but it seems to make a lot more sense to save that $15,000 (and let it grow) waiting for it to be a reality than paying for it today.


Taking the bus is pretty close, just don't sleep on another passenger / through your stop.


But you can fall asleep and wake up at work with public transportation.


Only if you can even get a seat, and don't have to change buses/trains.


No one is waking you for your stop lol.


Your phone can.

Beyond that, you may be surprised how many people would wake you up for your stop. Especially in US Commuter Rail systems people tend to follow similar habits and get to know each other over the years. Even on a normal city bus I quickly (months) got used to the pattern of the same people getting on and off at their various stops. If you’re asleep at your stop someone can usually nudge you


You could set an alarm. Worst case there is a delay and you wake up slightly early. The subway isn't going to magically go significantly faster.


I've missed trains that left a station ahead of schedule by a minute or two in my regional metro system. I've also had busses go past a bus stop earlier than the posted time.


This seems like a cool app. A bus-route-aware "wake me 3 minutes before my stop" vibrate app.


You can do this pretty easily with the Shortcuts app built into iOS. I have a flow to text my friend whenever I'm near their house.
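Under the hood it's just a geofence check; here's a minimal sketch of the idea (coordinates, radius, and names are made up for illustration):

    import math

    # Minimal geofence check, like a "wake me near my stop" app would run.
    # Coordinates and the trigger radius are placeholder values.
    def haversine_m(lat1, lon1, lat2, lon2):
        r = 6_371_000  # Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    stop = (47.6062, -122.3321)  # target stop (placeholder)
    here = (47.6100, -122.3300)  # current GPS fix (placeholder)
    if haversine_m(*here, *stop) < 500:  # within ~500 m of the stop
        print("buzz")  # fire the vibration alert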


The Swiss train app already does this. It also tells you exactly where to go with a plan of the station, how full the train is and where, where the dining car is, and tons of other things.


CityMapper notifies you a stop or 2 before yours.


There’s a few. I use one called Transit


When I had the luxury of taking public transportation to work, I did wake someone up once.

I personally can't fall asleep on trains.


They might if you live and work at the end of the line.


Have you tried asking other passengers?


I mean, to be honest, I'm completely unwilling to pay any additional cost for a license to proprietary software that runs on my vehicle, and I won't even consider a brand that offers paid software upgrades to their vehicles, entirely on principle. See the recent case of BMW offering subscriptions for heated seats. Just no, absolutely not. Reject faux ownership.


After having a Tesla with Enhanced Autopilot for 4 years, I decided it wasn't worth the $7,000 extra and stuck with ordinary autopilot.

It's just too damn glitchy to be worth thousands of dollars extra.


Hell, right now I'd pay a few bucks to disable AP so I could use the old-school cruise control. I still don't understand Tesla's opposition to making that an option whether you have AP or not.


Teslas don't have cruise control?! The arrogance...


They do.


they meant "save a few thousand dollars so my Tesla can just have the ordinary cruise control that it comes with, without the autopilot which I believe to be unusable"


No, I didn't mean that at all.

All Teslas come with Autopilot. It's adaptive cruise control and automatic steering that will keep you in your lane. It's awesome, and I love it.

"Enhanced" autopilot, which currently is $6000 extra, is automatic lane change, automatically taking exits, summon, and automatic parking.

The automatic lane change is nice, but it fails too often to be worth thousands of dollars. All the rest of the features are basically party tricks.

If I could have automatic lane changes for much less, I'd happily pay extra for it.

Edit: Enhanced Autopilot was $7k extra when I bought my 2nd Tesla, now it's $6k extra.


Prices and packages aside, I'm asking can you turn on basic cruise control without turning on any form of automatic steering?

To me, design hubris is forcing someone to accept a choice-limiting or poorly documented form of automation in order to accept a more basic form. I have a Fiat 124 Spider... love it, but it has one "feature" that I absolutely hate, which is that if you hold the clutch down and brake while stopped on an uphill incline, it holds the brake for 2 seconds after you take your foot off. Presumably this is for people who don't know how to balance clutch on a hill... although the overlap between those people and the people who'd buy this car must be vanishingly low. You can disable this dumb thing - but the only way to do it is to pull the fuse that controls antilock brakes. It's infuriating.


> Prices and packages aside, I'm asking can you turn on basic cruise control without turning on any form of automatic steering?

Yes.

BTW, the automatic steering is very easy to override by just turning the wheel. When I change lanes I just turn the wheel and then turn autopilot back on.

BTW, I also had a Subaru that held the brakes on a hill. I really appreciated it in San Francisco. Personally, I don't understand the hate. I never felt like I needed it, but I remember my mom complaining about other people rolling back in a steep driveway at my preschool in the 1980s. Maybe the feature is needed for those places in the world where everyone drives stick, even the people who are klutzes?


I learned to drive stick on the hills in LA, and owned a manual in SF... not rolling backward is part of being a good driver. I'm just saying I'd like the option to turn it off w/o disabling antilock brakes. For me, the feature creates unpredictable stalls or dangerous situations, because you never know if it's engaging. It engages on a 4% grade or something; it's hard to know if you're at 3% or not before you take your foot off the brake and find your car not moving as you let out the clutch. If you assume the brake is on and it isn't, you let the clutch out further to override it (it's also unclear how much throttle that takes) and you shoot forward into the car in front of you. If you assume it's off and it's actually on, you stall. People who live in parts of the world where there are only manual transmissions know how to drive a manual car. I think it was made for people who've only driven automatic but chose a manual this time. The Fiat 124, up until last year, was the only car sold in America where the automatic model was still more expensive - an add-on. AFAIK.

I don't care what features they add, and I've heard other people not be bothered by it. Just give me a way to turn them off!


Oh, yeah, I see why you don't like it.

In the Subaru when it would engage it would turn on a light on the dashboard. It also only engaged on very steep hills where you would always be stomping on the gas no matter what.

I think the Subaru just had a much better execution of it than the Fiat.

Edit: Maybe call the dealership and see if the level sensor can be adjusted? It really should only hold the brakes if you are on a hill that is so steep that you are naturally worrying about rollback.


That's an interesting idea, mangling the level sensor ;)

The other quick-fix is to always hold the e-brake up just enough to turn on the light when you think the grade might be >3%. This is apparently because whoever designed the system assumed that people who use the handbrake on a hill know how to release it and drive better than people who don't - whereas if you grew up driving on really steep hills in the US you weren't taught that method. The other way around it is to hold the brake w/o the clutch, keeping the car in neutral, and then unbrake-clutch-shift-gas really fast before the car rolls backward (more dangerous, but acceptable as heel/toe practice). I really wish it only happened on a very steep hill; it's the very mild slopes that caused me the worst surprises when I first bought the car and almost got into several accidents.

At least the thing doesn't have GPS ;)


> All Teslas come with Autopilot. It's adaptive cruise control and automatic steering that will keep you in your lane.

So like every Honda Accord as well.


And it works in stop and go traffic. The Accord only goes down to something like 30-40 mph.


Example video of Honda Sensing low speed follow/stop and go traffic:

https://www.youtube.com/watch?v=qs-e8QzTo7w

You don't need to be in 30+mph traffic for Honda Sensing to work. Most Level 2 ADAS systems behave similarly.


Accords work in stop and go traffic as well. My 2017 Hyundai also works in stop and go traffic.


To engage Traffic-Aware Cruise Control, pull the right stalk down once.


But make sure you have your seatbelt on first!


If this comment were aimed at any other company it would be flagged and dead as flamebait.


Only a horse can do that.


How about asleep in your driveway to an ER hospital bed? Does that qualify? It might excel at that.


Waymo does.


When someone can drive from Palm Springs to SF with no intervention, does that make it not bullshit anymore?


When someone can drive from Palm Springs to SF

Does that mean "stay in the correct lane on the interstate and take the proper exits without hitting anything" or does that mean "begin inside your garage and end inside another garage five hundred miles away without touching anything"? The first one is nearly trivial.

It stops being bullshit when they stop telling the user they need to pay attention to the road, and not one second before.


I wouldn't describe it as trivial. My Tesla would phantom brake consistently in a lot of spots - overpasses were a very big trigger.

Also, anytime I'd pass an exit I wasn't taking while in the right lane, it would veer and slow down aggressively; same with trying to speed-match a car rapidly accelerating on an on-ramp.

Bottom line, there's absolutely nothing that isn't driver assist; everything requires a significant amount of vigilance when you aren't driving in a straight line, in the left lane, on an empty highway with no one entering and exiting.


Imagine a scenario: someone's on a bike, riding down the street towards the pedestrian crossing. There's a vehicle (truck, bus...) on the left blocking the view. The cyclist swiftly crosses the road, the car doesn't detect anything as there's nothing to detect (because of the truck/bus), bam!

Many drivers would have seen the cyclist beforehand and be very careful when passing the other vehicle (although we don't predict correctly 100% of the time either). I haven't had a chance to test any self-driving system yet, so I'm legitimately interested: do those systems reliably detect such things?

Not to mention people (esp. kids) racing through intersections on electric scooters, casually ignoring traffic rules...


If you have a 99% chance of death, 1% of people are still going to make it.

You obviously don't have a 99% chance of death - but the mere fact that the trip is possible does not mean it is not BS.

You can drive drunk from PS to SF also.

What's your point?


I know a Tesla owner. The chance of death (well, a crash anyways) is more like 1%, according to him.

Those odds are good enough for the occasional trip home from a cocktail party, but hardly cost-competitive with Rideshares/Taxis.

How many 9's do we need before we can say it's reliable enough to trust it? A few more, for sure.


> How many 9's do we need before we can say it's reliable enough to trust it? A few more, for sure.

Yeah, 1% is definitely not going to cut it. What are the odds of dying when a human is at the wheel? Something like 0.000025% per trip, if my napkin math is right and my assumptions are in the right ballpark.
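Showing my work, roughly (the per-mile fatality rate and trip length are ballpark assumptions):

    # Napkin math: per-trip fatality odds for a human driver.
    # Assumes ~1.3 deaths per 100 million vehicle miles (roughly the US
    # average) and a ~20-mile trip; both numbers are assumptions.
    deaths_per_mile = 1.3 / 100_000_000
    trip_miles = 20
    odds = deaths_per_mile * trip_miles
    print(f"{odds * 100:.6f}%")  # 0.000026% -- same ballpark as 0.000025%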


FSD requires you to be 100% in control of the vehicle for the entire drive, paying full attention to the driving, while also doing nothing.

This is demonstrably not a task that anyone can do, let alone Joa[n] Average car driver. Highly trained pilots are not expected to do that, and that's when autopilot is being used in a vehicle that can provide significant amounts of time to handle the autopilot going wrong - seriously, when autopilots go wrong in aircraft at cruising altitude, pilots can have tens of seconds, or even minutes, to handle whatever has gone wrong. Tesla's FSD gives people a couple of seconds prior to impact.

That said in countries other than the US people can reliably use trains and buses, which also means that they don't have to intervene in driving the vehicle.


> That said in countries other than the US people can reliably use trains and buses, which also means that they don't have to intervene in driving the vehicle.

Most of Europe doesn't have ideal public transport either, and I'd imagine South America and Africa are the same or even worse in this respect. It gets drastically worse the moment you want to go somewhere in the countryside.


Yeah I don't understand the "legitimate" use case. I would imagine people who actually shell out the money get their car onto the interstate then strap a weight to the steering wheel until their exit comes up. Having to watch the road without the stimulation of actively driving is worse than having no assistance at all.


> Joa[n]

nit: Jo[a]n


nooooo! :D


I'd love to see how one would do in the streets of Saigon. Just a couple blocks would be enough... if it gets that far.

Update: https://insideevs.com/news/498137/autopilot-defeated-congest...


Sure, but they can't.


The technology is mostly fine. The bullshit is mostly the false expectations they set, and the subsequent risks they choose to take in implementation in order to minimize how far they miss those expectations. Other driving assistance systems try to do less, but they do it honestly.


The amount of Tesla apologism on HN is nauseating. Per the NHTSA, a safety recall is issued if either the manufacturer or the NHTSA determines that a vehicle or its equipment poses a safety risk or does not meet motor vehicle safety standards. On its face, Tesla's situation clearly calls out for classification as a recall.


Meh, the term recall conjures images of vehicles having to be taken back to the dealership to get repaired, replaced, or refunded. The recall announced here is just an over the air software update.

It's worth pointing this out.


In the same way that the term "full self driving" or "autopilot" conjures images of vehicles driving themselves?

Not sure that Tesla should be opening the "what do words mean" can of worms right now.


> In the same way that the term "full self driving" or "autopilot" conjures images of vehicles driving themselves?

I disagree with your point here, but whether correct or not you should be consistent: if words conjuring things is important, then the previous commenter's point is valid.


It’s really not a differentiator, is it? An issue is still an issue and must be addressed. OTA doesn’t change that.


It certainly is different. Recalls are expensive and annoying for the user, involving drives and waiting times. This is an OTA update.


The question you need to be asking is how many of these safety events they have swept under the rug because a fix is always "just an over the air software update" away.

This recall only happened because, god forbid, the NHTSA got off its ass and actually tested something.


>This recall only happened because, god forbid, the NHTSA got off its ass and actually tested something.

Where did you find this info? The article says it was Tesla that did a voluntary recall. The NHTSA was not the one who did any testing and was not involved in the decision. The report, which is linked in the article is authored by Tesla with no involvement from NHTSA.

If I am misinterpreting something I thank you in advance for pointing it out.


Per the chronology:

- On January 25, 2023 [..] NHTSA advised Tesla that it had identified potential concerns related to certain operational characteristics of FSD Beta in four specific roadway environments [..] NHTSA requested that Tesla address these concerns by filing a recall notice.

- In the following days, NHTSA and Tesla met numerous times to discuss the Agency’s concerns and Tesla’s proposed over-the-air (“OTA”) improvements in response.

- On February 7, 2023, while not concurring with the agency’s analysis, Tesla decided to administer a voluntary recall out of an abundance of caution

So yes, it was voluntary in the sense of "you don't want us to force you to do it" and it was indeed the NHTSA testing.


I had the opposite reaction. The author is an avowed Elon & Tesla hater and tried to work the word "recall" into almost every sentence, despite the fact that it will be a simple over-the-air software update. No recall whatsoever.


Just because the issue can be solved by an update (and lately some recalls actually have been - see the Hyundai GPS debacle) does not mean it is not a recall. It just happens to have a different means of correction. In other words, an update equals a recall.

And that is before we get into whether it is appropriate to OTA-update something that can move with enough kinetic force to have various security agencies freaking out over possible future scenarios (which is likely the cause of some of the recent safety additions like remote stop).

In other words, just because he may be a hater does not make the argument invalid.


We live in a strange world. No one should worship that sad narcissistic piece of crap but yet the sycophants continue to multiply.


Arguably the greatest entrepreneur in history. Is that worthy of some praise?


Ooh I can argue against that! I mean, he's clearly not. He's more of a fantastically savvy investor than anything else. The man clearly does not run any of his companies because it's impossible to run 3 companies at once. Don't let him fool you. It is IMPOSSIBLE.

There are clearly far FAR better entrepreneurs in history but if you keep fellating him I'm sure Daddy Elon will love you one day. You got this, don't give up!


Care to name one or 2?


Think I'd still go with Rockefeller on that one.


One oil company?


You're trolling, aren't you? No one can be this stupid, especially not someone who reads this blog.


How so? I consider multiple creations substantially more impressive than one.


Then you'll be very impressed by the number of creations Rockefeller had. Standard Oil itself was an empire, not a factory, in a time when it took weeks and an army to get even basic information sent out.

Carnegie was incredible as well.

I'm not saying Elon isn't impressive, he is. He's just not the MOST impressive.

One tiny space sailing company and a shoddy carriage company compare poorly to the empires of Standard Oil, Carnegie Steel, Ford, and the side projects of these people (multiple high-impact philanthropic enterprises). They're the literal hands that built America.

He's still young though, there's some hope for him.


I'm confused. Rockefeller now gets credit for Carnegie, Mellon and Ford? SpaceX, by far the most prolific private space company ever, is a "tiny space company"? The by far most consequential EV company is a "shoddy carriage company"? And that's not even mentioning Zip2, PayPal, OpenAI, Boring or Neuralink? Wow.


_the_ oil company.


I think this has to do with the growing gap between the richest/upper class and the poorest folks. Sure, you find plenty of HN folks making 3.5x the average salary... but the majority, considering inflation etc., are not doing this well.

So average poor Joe looks at Musk and, instead of seeing him for what he is, cheers him on, thinking "one day I will be like him [that rich], so I would want people to cherish me the way I cherish him now". I think we see that on every front, sadly including politics. I mean, people like Boebert or MTG should never have had any power even to decide what mix to use to clean a dirty floor, let alone decide millions of Americans' fate, but yet here we are...

If anything, this numbness and ignorance will grow.


It seems like there is a huge disconnect between people who know about FSD from using it and people who know about FSD from things they read on the internet.

I have FSD, I use it every single day. I love it. If every car on the road had this the road would be a substantially safer place.


FSD is terrible. I have it, it's dangerous. If I had to go 5 miles through city streets I'd sooner trust a human after 3 shots of tequila before I'd trust FSD every time.


Disagree. I've had the beta for a year and drive in NYC. It has been mostly flawless considering all scenarios it has to handle. It does make the occasional mistake but (1) it's a beta and (2) I'm responsible for monitoring and correcting it. Agree that those that expected more should be entitled to a refund if they so desire.


> It does make the occasional mistake

Yeah, that's what I mean by "dangerous".


Ok, but don't you have to weigh that against the potential of avoiding accidents by reacting quicker than you as a human can?

I have definitely seen cases where FSD Beta stopped and the driver didn't instantly understand why.

Or simpler cases where the car follows a lane while the driver wasn't paying attention (adjusting the radio or whatever). Those can easily lead to an accident but are unlikely to if you are in FSD.

How do you make that calculation?


The calculation is easy: FSD does the wrong thing so often that there's no question a human driver is safer.


The question is not whether FSD makes mistakes, it's whether FSD plus a human driver monitoring it is worse than a human driver alone.


All the available scientific evidence suggests these systems diminish driver focus and attention, which definitely makes them less safe, since these systems can't be operated safely without an attentive driver.


I have seen some evidence based on some limited lane-keeping systems that suggests this. But not on systems as advanced as FSD Beta for inner-city driving, with the same degree of driver-attention enforcement.


Consider the possibility that you might both be right. Two people may have vastly different experiences with FSD, depending on the routes they take and other variables.


Also, some people may have a very low tolerance for the car not driving the way they would. It does make you uncomfortable.


Do you have the latest beta?

How did it improve in the last 12 months?


That other drivers are beta testers for software controlling a large mass of metal and combustibles hurtling down asphalt at _ludicrous_ speeds is something you'd write into a dystopia story.

The fact of the fiction is terrifying.


It’s a choice; you are in control all the time.


https://www.sae.org/publications/technical-papers/content/20...

"This study investigated the driving characteristics of drivers when the system changes from autonomous driving to manual driving in the case of low driver alertness.

...

In the results, a significant difference was observed in the reaction time and the brake pedal force for brake pedal operation in the low alertness state, compared to the normal alertness state of the driver, and there were instances when the driver was unable to perform an adequate avoidance operation."


The other drivers on the road, and pedestrians, aren't getting to make that choice.


You can re-take control, which may take seconds depending on the mental and physical state of the driver. A lot can happen in a second.


Not sure if it's the latest, and nowadays I rarely use it; when I do, it's just to show off to curious friends. But I live in an East Coast metro and I've never been able to use FSD for more than 20 minutes without it making a dangerous mistake - 90% of the time it's been a mistake making a left turn: turning into the wrong lane or into oncoming traffic after completing the left, not correctly respecting the traffic lines or the left-turn lights, and other very scary situations.


So it’s good enough that you let it take control and become less attentive?

I was thinking that you get a feel for the conditions in which it behaves correctly (highway, low traffic) and then use it in progressively more difficult situations as it proves to you it can handle them consistently.

I know it’s not ready yet. I’ve heard and seen so many positive reviews (but still with corner cases and strange behaviours)


I've not had any FSD issues on the highway besides phantom braking; it keeps lanes correctly, safely adjusts to traffic speeds, and merges into the offramp lane without issue (though it still makes me nervous). Once it gets into city streets, though...


Musk has the latest internal release that will fix every problem at the end of the month, every month.


It seems there’s a huge disconnect between people who know how to drive and are confident in their skills, and those who are scared to drive and want to offload it to Musk.

I’m actually afraid to drive behind a Tesla and either keep extra distance or change lanes if possible. I still have more faith in humans not to randomly brake than in beta FSD.

It’s one thing to put your own life in the hands of this beta model, and it’s another to endanger the life and property of others.


Yeah, it's not that I'm scared of my own driving; I mean that if other people had it, it would be safer. Things which have happened in the last 3 days of moderate driving around a city:

1) A car gets impatient with some traffic turning right on a green light in a construction zone, and comes into my lane, meaning into oncoming traffic. We have to swerve out of the way and slam on the brakes to avoid them.

2) A guy gets impatient behind me slowing down over a speed bump, tailgates me a few inches behind, and then passes me by cutting into oncoming traffic in a no-passing zone, then cuts me off, again with a few inches to spare, and speeds through the neighborhood, where we eventually meet at the stoplight at the end of the road.

3) Every single day people run red lights. At almost every light where people are turning left on a green, there are 4-5 cars that go through after it turns red.

My safety score on my Tesla is 99. I am an extremely safe driver. I wish FSD were more common because so many people are terrible drivers.


Your Tesla tells you you're a safe driver. Well, geez, I'm convinced that the $55,000 (maybe more?) you spent tells you things you like to hear. I mean, one of the criteria is "Late Night Driving". Why not driving between the hours of 4pm and 6pm, when there are a lot more cars on the road, which is just statistically less safe?

https://www.tesla.com/support/safety-score#version-1.2

And looking at the rest of the criteria, they're ok, but hardly comprehensive. This is like, the bare minimum. It doesn't measure what I would call "doing stupid shit". Like crossing several lanes of traffic to get in a turn lane. Forcibly merging where you shouldn't. Nearly stopping in the middle of traffic before merging into a turn lane at the last minute. Straddling lanes because you're not sure if you really want to change lanes or not. Making a left turn from a major artery onto a side street at a place that is not protected by a light. Coming to a near complete stop for every speed bump, then rushing ahead to the next.

And a host of other things that demonstrates the person does not consider other people on the road at all.

Here's an entire article about how to game the safety score:

https://cleantechnica.com/2021/10/14/three-quick-tips-for-a-...

One of the tips is to "drive around the neighborhood when traffic is light".

And the car doesn't ding autopilot for behaviors it would knock a human for. Because the assumption is that the car would know better I guess. But then why isn't the safety score simply a deviation from what autopilot would do in a situation? If autopilot would brake hard to avoid a collision, shouldn't you?


The Tesla doesn't just tell me I'm a safe driver, it also gives me a cheaper insurance rate due to the safe driving. The safety score is related to my monthly insurance premium.

So maybe it's to stroke my ego? But they're also putting their proverbial money where their mouth is.


The insurance you get from Tesla, which has a vested interest in its own Safety Score. It's not giving a cheaper insurance rate "due to the safe driving", it's giving a cheaper rate due to having a better score on the metrics it decided.

You see how that's circular, right? It does not mean you are a safe driver.


…what? No I am not following that. The safety score is a representation of the metrics used to determine the insurance rate. This is not in any way circular. The insurance rate and the safety score are representative of the same thing: safe driving.

https://www.tesla.com/support/safety-score#version-1.2


They're both from Tesla. There's no real proof that they're representative of safe driving. And you kind of want it to flow both ways: "the safety score represents good driving, so I get cheaper insurance" and "I get cheaper insurance so the safety score must represent good driving".

You're trying to use each of these things to validate the other so you can then claim it's something entirely else.

I think it's telling that autopilot is allowed to drive in a manner that would otherwise negatively impact your safety score. Either autopilot is unsafe or the safety score doesn't measure actual safe driving.


Man, I think you're really confused about what the "safety score" is. Here it is, stated a simpler way:

Tesla tracks driving style and gives you an insurance rate based on those driving habits.

My insurance rate is about $90, which is the lowest the rate goes, because the things it tracks (following distance, for one) correspond to a lower liability risk for the insurance underwriter, which in this case is Tesla.

The monthly price is directly correlated to driving habits. You seem to be getting confused about "safety score".

I'm genuinely confused as to what about this doesn't make sense to you.

Tesla is the underwriter. They give a monthly premium based on driving habits.


I'm not confused.

Tesla is the insurer. Tesla is also the manufacturer of the insured item.

You are using as proof of the claim, the fact that the group making the claim says they're right. This is big "We've investigated ourselves and found nothing wrong" energy.


You are talking to someone who is excited about… The Future (tm). And will defend their purchase and choices to the death. So I doubt there is any reason to continue. The fact they even gave us their Tesla-given “safety rating” like it was something to give a shit about says a lot.

We are all glad your car is giving you head pats and game badges.


Also, the Tesla driving score reminds me of the Windows “Experience Index” score. Which was complete and utter toss. But made people feel good about their systems.


> I’m actually afraid to drive behind a Tesla and either keep extra distance or change lanes if possible.

Irrespective of the Teslas or FSD:

If you're afraid of an accident due to the vehicle in front of you braking regardless of the circumstances, then you're following too closely. It doesn't matter if the braking event is anticipated or unexpected. If you're not confident in your ability to avoid an accident if the vehicle in front of you slams on their brakes, then you are following too closely.
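A rough sketch of the physics behind that, if it helps (reaction time, speed, and friction are assumed values, not measurements):

    # Total stopping distance = reaction distance + braking distance.
    # All numbers below are assumptions for illustration.
    reaction_time_s = 1.5  # typical driver reaction time
    speed_mph = 70
    mu, g = 0.7, 9.81      # dry-pavement friction, gravity (m/s^2)

    v = speed_mph * 0.44704  # mph -> m/s
    reaction_dist = v * reaction_time_s
    braking_dist = v ** 2 / (2 * mu * g)
    print(f"{reaction_dist + braking_dist:.0f} m")  # ~118 m at 70 mph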


> If you're afraid of an accident due to the vehicle in front of you braking regardless of the circumstances, then you're following too closely.

I know what you're saying, but that is not what I meant. Also, I don't follow too closely.

The difference here is that a driver will almost always brake depending on the happenings in front of them. So, if you pay attention to not only the car in front, but also the cars in front of them and in the neighboring lanes and so on, you can sense and detect patterns in how a particular driver is driving. There are many skittish drivers who brake every second and some who don't, and so on. Basically, based on your driving experience you can predict a little.

The problem here is that this stupid POC FSD will brake randomly or change lanes randomly or whatever, so there is no way you can predict, and hence my concern and issue with it. I just prefer to change lanes, but that's not always an option.


> Basically based on your driving experience you can predict a little.

Yes, we all do that AND you're using that predictability to take liberties in safety such as following too closely. FSD's unpredictability exposes the vulnerability in your driving process and makes you feel uneasy.

You (and everyone else) can follow too closely AND FSD can be an unsafe steaming pile of crap. It's not an either or situation.


If you drive like everyone is trying to kill you, which they definitely are, then whether it's FSD or another idiot makes no difference to you.


This is actually how I tell my kids to drive lol.

“Drive like everyone else is an impatient brain dead dipshit intent on killing you”

“Also, check your mirrors constantly. “


I have it (subscription, just wanted to try it out). I won't use it again. It confidently attempted to drive me into a rather solid object with very little warning. If every other car on the road had this, I would sell all my cars and forbid family members from getting anywhere near the road.

It's nice on the freeway though.


To me it's weird that you can voice an opinion like "I won't use it" and then say it's nice on the highway, as if you've given it enough of a trial there to endorse it, when you think it's otherwise going to be killing people.


How is it weird that FSD might work well on extremely simple, boring stretches of road but fall apart as complexity increases?


Not that weird, it's just weird to recommend it after you've seen it almost kill you. I doubt he's put in the requisite time/miles/effort/observation/whatever to know that it does great on the highway but not elsewhere.

There's solid objects on highways, too.


Because the Tesla fans will cling to anything to further their arguments.


Pretty sure almost crashing into a solid object seems good enough for giving it a failing score.


I definitely agree, in which case I wouldn't find myself endorsing it for any use. Just because I drove X miles without a problem on the highway doesn't mean it's any less likely to drive into a solid object. Why would he recommend it?


I dunno man, according to another poster, I think FSD would probably give itself a safe driver rating of at least 85 for that.


"Works on my machine!"


In fact, there is a huge disconnect between people who design similar systems (vision, safety, self-driving, robotics, simulations) and those who use them. The criticism of Tesla is coming from experts, not jealous competitors.

The problem with systems where 99% is a failing grade is that 99% of the time they work great. The other 1% you die. No one is against FSD or safer tech. They are against Tesla's haphazard 2D vision-first FSD.

Wanna hear about this new fangled tech that avoids 100% of accidents, has fully-coordinated swarm robotics between all the cars, never swerves and can operate in all weather + lighting conditions ? It's called a street car.

The words 'road' and 'safety' should never be uttered in the same sentence.


Yep, I used to be in this field back in 2018, and everyone was extremely dismissive of the rest of the industry's skepticism.


GREAT comment.

I think a large issue is people are really bad at risk assessment.

Also, why the hell did they try doing camera-only? Fucking dumb as hell. I would have trusted something like FSD a lot more if they had also put in very advanced radar etc. Trying fucking /cameras/ only for newer models should be a criminal offense.


It's well known at this point.

When the Model S was at its sales peak from 2013-2020, lidar was still pretty expensive. So he put cameras in the original Teslas.

When Karpathy took over in 2018-ish, Tesla realized that all their advantages (dataset size, quality of researchers) were in the 2D realm. Waymo had been doing lidar for longer, seemed to have better 3D engineers, and could keep upgrading lidar devices as they went.

Tesla's insistence on 2D cameras has to do with stubborn desperation more so than a willingness to play catch-up. Elon needs 2D cameras to 'win' because even as early as 2019, he thought it was too late to start playing catch-up in 3D.


> The other 1% you die

Um... no one died. I understand your point is hyperbole, but you're deploying it in service to what you seem to claim is serious criticism coming from experts. Which experts are claiming FSD beta killed someone? They aren't. So... maybe the "experts" aren't saying what you think they're saying?

> Wanna hear about this new fangled tech that avoids 100% of accidents, has fully-coordinated swarm robotics between all the cars, never swerves and can operate in all weather + lighting conditions ? It's called a street car.

And this is just laughably wrong. Street cars (because they operate on streets) get into accidents every day, often with horrifying results (because they weigh 20+ tons). They are surely safer than routine passenger cars, but by the standards you set yourself they "aren't safe" and thus must be banned, right?


As advanced cruise control? It's great. As what it's marketed as? It's fraud.


[flagged]


Didn't Musk partly create OpenAI?


He exited OpenAI a long time ago, when something didn't go his way.

Now it's mostly Microsoft and Altman.


[flagged]


AI model improvements:

* Optimus cannot say bad things about Elon Musk anymore.

* Added support for Donald Trump.

* Added support for The Boring Company flamethrower.

* Advertising for Dogecoin.


* ChatGPT will now cite Elon Musk tweets as a source for 50% of questions.


Twitter as ultimate source of truth


I have the same experience. I think if there was more transparency in the data around FSD related accidents the conversation would be different. The last evidence we have is Tesla stating that there have been no fatalities in 60 million miles of driving under FSD. Pretty good so far.


There is no such evidence beyond unsupported proclamations by the vendor, who, by the way, has compulsively misrepresented the capabilities of the product for years. The only evidence available that is not tainted by a complete conflict of interest is basically YouTube videos and self-reported information [1] from investors and superfans, which on average show critical driving errors every few minutes, and even those are tainted by a self-interest in over-representing the product.

Tesla's statements should only be believed if they stop deliberately hiding the data from the CA DMV by declaring that FSD does not technically count as an autonomous driving system and is thus not subject to the mandatory reporting requirements for autonomous systems in development. If they stopped doing that, there could actually be sound independent third-party statements about their systems. Until then, their claims should be trusted about as much as Ford's were on the Pinto.

[1] https://www.teslafsdtracker.com/


Absolutely need more data, but if a public company makes untruthful claims about the past they are in big trouble. They are mostly free to make claims about the future without much consequence.

This is why startups are allowed to project massive sales next year but Elizabeth Holmes is headed to jail.

If it turns out that FSD has had a fatal crash and Tesla lied about it Musk is headed to jail too.


That’s cute that you think that.

We don’t live in a “just world”. There’s a reason a lower tier person like Holmes gets some prison and psychos like, say, Trump or Musk barely get a slap on the wrist.


Yes, if we forget that FSD may conveniently disconnect right before an accident.


In Tesla's collection of data on accidents involving Autopilot technologies, Tesla considers Autopilot to have been enabled during a crash if it was on several seconds beforehand. These systems may hand control back to the driver moments before impact in crash scenarios, but that doesn't mean the data will report the system as disabled for the incident.
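
For illustration, a minimal sketch of that counting rule (the exact window is my assumption; Tesla's published methodology has been reported as counting a crash if Autopilot was active within roughly 5 seconds of impact):

    # Hypothetical sketch of an "Autopilot-involved" counting rule.
    # WINDOW_S is an assumption; reports put Tesla's window at ~5 seconds.
    WINDOW_S = 5.0

    def autopilot_involved(crash_t, last_active_t):
        """Count the crash if the system was active at, or within
        WINDOW_S seconds before, the moment of impact."""
        if last_active_t is None:  # never engaged on this drive
            return False
        return crash_t - last_active_t <= WINDOW_S

Under a rule like this, a disengagement moments before impact is still counted against the system.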


Yeah, let's now hear the stats on "fatal accidents within 30s of FSD being enabled".

You really just can't trust Tesla at all, about anything. There's no integrity there. They won't even be honest with the name of their headline feature!


Regardless of whether or not that's true, that's also not the impressive number you think it is. Human-driven cars have a fatality rate of about 1 per 100 million miles travelled, and that's including drunk drivers, inattentive drivers, and all environments, not just the ones people are comfortable turning on these features in.
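
A rough back-of-the-envelope using those figures (both numbers are approximate):

    # Expected fatalities over Tesla's claimed FSD mileage at the
    # human baseline rate of ~1 fatality per 100M miles.
    human_rate = 1 / 100_000_000   # fatalities per mile
    fsd_miles = 60_000_000
    print(human_rate * fsd_miles)  # 0.6

Even at the average human rate you'd expect fewer than one fatality in 60 million miles, so observing zero tells you very little either way.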


Yes. The data collection continues. But this initial report indicates that FSD is probably not massively worse than a human.


This has been my experience as well - I use autopilot a lot and find it works great


Ditto. Most fun I've had in a vehicle in my whole life. A robot drives me around every day, and the worst misfeatures are that it's too timid at stop signs and left turns, makes poor lane choices sometimes (oops, can't turn, will go around), occasionally zipper merges like a jerk (yes, technically you can use that lane, but if you try you'll get honked at -- car doesn't care).

But the company is run by an asshole who people love to hate, so... everything it does is Maximally Wrong. There's no space for reasoned discourse like "FSD isn't finished but it's very safe as deployed and amazingly fun". It's all about murder and death and moral absolutism. The internet is a terrible place, but I guess we knew that already. At least in the meantime we have a fun car to play with.


> FSD isn't finished

"Full self-driving" has not been delivered after years worth of lies. It isn't about "murder and death and moral absolutism". It's about fraud.

Your vehicle is a dead end. It will not be upgraded to the latest hardware. It will not attain "full self-driving".


This is a "me or your lying eyes" argument. It drives my kids to school. Why is it so important to you that I not enjoy the product I purchased and love? You realize you aren't going to win an argument like that, right? You think I'm going to just wake up and realize that it can't do what I see it do?


It doesn't matter how you rationalise it to yourself or how irresponsible you are in using it. Your car does not do full self-driving. It is stranded on Hardware 3. It will not get Hardware 4 unless a lawsuit enforces it: https://electrek.co/2023/02/15/tesla-self-driving-hw4-comput...

Tesla has lied about full self-driving for years: https://www.tesla.com/blog/all-tesla-cars-being-produced-now...

Tesla has admitted to false advertising around full self-driving: https://electrek.co/2022/12/12/tesla-ordered-upgrade-self-dr...

Tesla has admitted that they have failed to deliver full self-driving. Tesla is now scrambling around trying to avoid being done for fraud: https://electrek.co/2022/12/07/tesla-self-driving-claims-fai...

If even Tesla can admit it, find a way to admit it to yourself.


> Your car does not do full self-driving.

It literally drives itself. You're hiding behind a semantic argument about what "full" means, and how autonomy is deployed. It's the "Money Printer Go Brrr...." meme.

You don't seem to be able to understand that the product as deployed has value to its purchasers, and that's leading you to some really emotionally dark places. I mean, it shouldn't be this upsetting to you that I like my car, and yet it clearly is. Just... maybe just don't? Let it go. You know these cars are safe as deployed. Let the Tesla nuts have their fun.


> You're hiding behind a semantic argument about what "full" means

No, I'm "hiding" behind the truth. I'm "hiding" behind the benchmark Tesla very publicly sets for itself. Every year. Year after year:

https://jalopnik.com/elon-musk-promises-full-self-driving-ne...

> that's leading you to some really emotionally dark places

Okay, this is just sad now. It's like a kind of Stockholm syndrome. Cult members go one of two ways when confronted with realities which contradict their beliefs: they either finally recognise the silliness of their beliefs or they become even more ardent believers.

It's better to try to find a way to be objective. You shouldn't let people con you and you certainly shouldn't propagate the lie.


I have a feeling the difference in experiences is heavily influenced by where you are located. There is no way they are able to adequately train across all of the locations where FSD is expected to work.


Australian here. We drive on the "wrong" side of the road. I have no idea if FSD even accommodates this scenario. And even if it does, I'll bet that Melbourne's diabolical yet somewhat logical hook turns (turning right from the furthest-left lane) don't even fall within FSD's scope.

https://en.wikipedia.org/wiki/Hook_turn?wprov=sfla1


Because of this, if Tesla solves it they are going to have an almost insurmountable moat.


It really depends where you drive, it behaves super well in some conditions, and awfully in others. Hence the wide disparity in experience.


Exactly.

I think a lot of "people" online are paid shills.

HN should severely downrank any "opinion" posts from anon accounts.

I have a Tesla with FSD and it's incredible. Though it drives way worse than me, it drives pretty darn good, and will save a ton of lives and change the world by freeing up a lot of time for people to do more important things.


Is your post supposed to be read as genuine or sarcastic? It looks to me like a strawman of a Tesla owner stereotype, and I don’t want to accidentally eat the onion here by taking it at face value.

Your post appears to be saying that Tesla’s FSD is way worse than human drivers, not safer than them, but we should still welcome it for the convenience. Again, this reads like a strawman. I would like to think the average Tesla owner would not endorse that statement.

The goal isn’t to save time at the cost of lives, it’s to save both, non?


> is way worse than human drivers

No, I said it's way worse than me. Probably around performance of the average human driver.

I am training to be an astronaut, so on the advanced side.


A lot of people say they’re significantly better than average driver. It’s such a stereotypical thing to say, that it probably has negative truth value on average, since it reflects broad personal confidence more than actual driving skill.

You still sound like you’re trying to present the weakest possible straw man for Tesla critics to attack. Stop this bad faith attempt to make Tesla look bad; you’re dragging down the discourse. Nobody is going to take your bait and feed you the expected counter arguments to the nonsense you expressed up-thread.


I suspect that the NHTSA knows about FSD from all of the above and more.

Even if Tesla's current implementation were objectively safer than the average human driver, it would still represent a net negative for safety because of the impact the glaring flaws will have on people's confidence in self-driving tech.



I've never used it but in my book "Full Self-Driving Beta" is an oxymoron


Until your car decides to randomly slam on its brakes and cause a pile up.


I'm reminded of a thread on HN where someone did a cross country trip and their car only phantom-braked a couple of times, and called that a huge success.


Ha, my new Model 3 phantom braked the first time on the trip home from the service center, which was only 10 miles. I doubt it had even gone over 10 miles on the odometer at that moment.


How were you able to put it into Autopilot? Typically, Autopilot calibrates itself over the first 60 or so miles of driving and does not let you engage it.


Good question! It didn't argue one bit; I literally drove it off the lot at the service center, hit the freeway, and turned AP on (it's my second Model 3, so I already have such habits).

It's entirely possible it had more than 5 miles on the odometer when I picked it up. Officially the paperwork says 15, the associate who handed me the car said it was actually less, and the odometer isn't very prominent, so I never even looked. Maybe it had 50 miles and was a reject ;). I should go check TeslaFi, since I re-enabled that the day I bought the car. I confess that the only way I ever know how many miles my car has is when I see it on TeslaFi.


My 2022 Model Y didn't require any calibration to use autopilot after initially picking it up. IIRC there were 7 miles on the odometer at the time.


If my car slams on its brakes, and this causes you to hit me, then you are exactly the type of person I wish was using FSD.

You're following too close to me for the conditions, you're distracted, or some combination of both.

My car might have to emergency brake for any number of reasons. This should never cause a pileup, and if it does it's your fault, not mine.


That's... dumb.

I'm the type of person who keeps a lot of car lengths between me and the car ahead, like you're /supposed to/, but I've also had someone panic-stop in front of me and, due to road conditions, I couldn't stop in time and hit them. Not hard. But I hit them all the same.

FSD and similar aren't magic bullets that would prevent that. People like you who seem to think they are, are also part of the problem.

Would a completely automated roadway be fucking awesome for safety and productivity and relaxation? Yes. Is it going to happen anytime soon? Doubtful.

Blows my mind how many people are totally willing to risk their own safety and everyone else's using known buggy-as-fuck software. On the road. At 70 mph. In a 2-ton vehicle.

… but it hasn’t been an issue for you! You say. Ok.

But I also watch dumbasses lane-hop with barely enough room to pass, like it's a video game, and then act surprised when they wreck. So I guess I'm giving people too much credit.


If someone brake checks and you hit them, you're just as much of a problem. Learn defensive driving and stop driving so close to another driver. Keep a safe distance.


Tailgating pisses me off. I don’t do it.

But this backlash to my comment about hard braking and wrecks is kind of hilarious.

Basically, my point is that Tesla has made egregiously bullshit claims about FSD and autopilot. They have given them misleading names. And people who want to “live in the fuuuuture” have eaten it up and are using the Teslas like they are actually capable of driving themselves 100% of the time. In all conditions.

If anything, it has made people who rely on it seemingly worse, inattentive drivers.

So while I’m glad y’all enjoy your purchased vehicle, don’t pretend all gripes and complaints lobbed at them are invalid or someone else’s fault. That’s asinine and childish.


Because only an FSD car causes pileups... you're right, I NEVER saw 90-car pileups on the news before FSD. /s

I'm pointing this out because he said (in italics) "substantially".


It doesn't really matter, we have multiple incidents of FSD causing accidents due to outright mistakes, not even "a human would have messed up here too" situations.


Do we? Because most of the news reports about crashes blamed on Autopilot or FSD turn out, after investigation, not to be such cases.

In one famous case the police claimed that the driver was '100% not in the driver's seat'. This caused a huge media storm and an anti-Tesla wave.

Just recently the full report came out and stated that Tesla AP was not used at all and the driver was driving normally.

https://electrek.co/2023/02/09/tesla-cleared-highly-publiciz...

There are quite a few such cases, this one maybe being the one that caused the most media attention.

So I tend to discount all such reports unless it's several months later and a full investigation report has been done.

Do you have links to verified reports of FSD causing such crashes?

It seems to me that phantom braking could lead to such issues, but I have not yet seen a real report claiming this happened.


Humans also mess up in situations that aren't "a human would have messed up here too".


Sure, but say I consider myself a diligent driver and I've never caused an accident... what does FSD have to offer me? Yet another random opportunity for a car to fail me. Why would I surrender control for that?


The confusion from multiple HN posters here confirms that the "recall" language is inadequate to capture what is happening here.

Obviously attention should be drawn to the fact that there is a critical safety update being pushed OTA, but "recall" is too overloaded a term if it means both "we're taking this back and destroying it because it's fundamentally flawed" vs. "a software update is being pushed to your vehicle (which may or may not be fundamentally flawed...)"

I do think something beyond "software update" is necessary, though - these aren't your typical "bug fixes and improvements" type release notes that accompany many app software releases these days. I don't think it would be too difficult to come up with appropriate language. "Critical Safety Update"?


> Obviously attention should be drawn to the fact that there is a critical safety update being pushed OTA, but "recall" is too overloaded a term if it means both "we're taking this back and destroying it because it's fundamentally flawed" vs. "a software update is being pushed to your vehicle (which may or may not be fundamentally flawed...)"

How many times in history has a vehicle recall meant the cars were returned and destroyed?

What makes this situation any more confusing than all the previous times vehicles were recalled?


In this scenario 'recall' is a legal/compliance term. It's appropriately used.


I think recall is just fine. Recall offers no ambiguity in my mind. It means the manufacturer fucked something up, big time. Everything else is just details.

In the current world of forced updates (looking at you, Android), the word "update" itself is kind of toxic, and doesn't (and I would argue can't) represent what has happened here, even if it's technically more correct.


I love that this is the same guy who wants to send people to Mars one-way and then figure out the return trip later.


Anybody else reminded of that story where the guy gets a sketchy neural implant, then goes to a dystopian mars settlement, and then rips the self-driving feature out of a car with his bare hands?

What was that called again?

/puts on sunglasses preemptively/


Um, what is it?


Total Recall


Having never seen it, I have no context for which details were left out or bent to fit the narrative, but this still feels amazing. Well-earned sunglasses


It is definitely a must-see movie, the original 1990 version with Arnold that is.


I Don't Recall


Right, but to be eligible for the return trip you'd first need to pay the loan you took to go there by working as an indentured servant and buying your oxygen and food at company-set prices, right?


What, you think people should be able to breathe for free?


A scene in Heinlein's The Cat Who Walks Through Walls has the protagonist sternly lecturing a ruffian on the importance of paying for air in a domed city on the moon. I still can't tell if Heinlein was serious; a later scene has the same character demanding a transplanted foot be cut off because he felt it indebted him.


Heinlein might be the kind of person to genuinely believe that. He also was the kind of person who would write a book as an excuse to write a manifesto. Starship Troopers was just him bitching about "kids these days"


Part of the plot of the episode Oxygen (Series 10, episode 5) - https://en.wikipedia.org/wiki/Oxygen_(Doctor_Who)

It's an interesting watch (especially with the context of labor relations in mind).


Life imitates art, you ever seen Total Recall?


Or Spaceballs for that matter? Wouldn't be the first idea inspired by that film.


You'd be surprised how many people want a one-way ticket to Mars. I think Elon can make good money from that alone.


Isn’t Antarctica just Mars-lite? Do you think a mere treaty is hampering settlement there? That’s where my mind goes when people talk about colonizing another planet.


These people are incredibly stupid. They somehow think colonizing Mars is easier than regulating the environment here on Earth, somehow think Elon as dictator is better than any Earth government, are interested in putting undertested and sketchy implants into their brains even without a proposed use case, and seem to think that the only reason we don't regularly visit Mars is that we don't have a big enough rocket, as if easy access to tons of powdered rust were economically useful.


Flying to another planet on a rocket is obviously very different; the goal is way more interesting than the challenge of living in a hostile environment, even if it's ultimately a useless exercise for humanity.


Or how people aren't in a hurry to build homes inside an airless radioactive walk-in freezer, which is still way more livable than Mars.


I'm not sure that the people who want one of those will want them if:

* they are not the first (or in the first ~50 or so) to get there

* they have a lot of money and a lot of income on Earth

This is kind of a glory-seeking thing for people who haven't made a name for themselves otherwise. Rich people would want to have a return flight. That is not a recipe for making a ton of money.


Gosh we can only hope all the rich people shoot themselves to mars with no return trip though. What a dream that would be for the rest of us.


Think of it this way: you go there, work for a company, and in your spare time you find gold, raise capital to start mining it, get rich, and build a huge house with swimming pools and all the other rich attributes, but on Mars. Mars will be colonized by a great mixture of scientific and enterprising people. Your kids will grow up in this environment; they'll receive the best education and endless room for possibilities.


Sure buddy. You will be able to prospect for and mine gold in your spare time after your one-way trip to Mars, including building all of the equipment you need, or having it sent over by rocket. And you'll be able to own land there too, because homestead laws and stuff will definitely carry over.

If you are already rich on Earth, you don't need to take risks that insane to try. In other words, nobody who can afford a ticket to Mars will pay for one.


Surely elon will allow lots of other people to get rich, instead of just capturing any possible value for himself, since he would literally be a god emperor.

There are no rules on mars, except the ones that the people who could simply push you out the airlock make. Look how elon treats his workers and you will understand how a mars colony with his backing will look, except even worse.


I hate that I don't know if this is satire or not.


I feel like The Expanse will probably be a more accurate picture of the realities of a Mars colony


I really can't figure out if this is serious or not.


Think of it as a startup: everyone who goes first will get benefits, own land, and know how to do business in a constantly growing colony.


Of all the possible reasons to go to Mars, "economic opportunity" is not even remotely one of them.


> You'd be surprised how many people want a one-way ticket to Mars.

Can I nominate someone?


Don't let this distract us from important work building the Solar Lander


Colonization is his goal, guaranteeing a return trip for everyone seems pretty dumb.


> Colonization is his goal

Bullshit. Colonization _might_ be a side effect. Bilking various governments out of subsidies and fat contracts is the goal. Just like every other single Musk venture, it is based entirely on feeding at the trough of federal largesse.


He is self-funding Mars by building businesses like SpaceX, which sell products like any other company. If you think there shouldn't be tax incentives for things like electrification, CO2 reduction, or whatever other subsidy the government offers, then your problem is with your government's management, not with the businesses using government programs.


It's only "bilking" if SpaceX doesn't deliver. They do.


Once they are there, can't you just recall them back with OTA updates?


Why do you hate the idea of going to Mars so much?

https://idlewords.com/2023/1/why_not_mars.htm

I would love to go there, even if it meant I died the second I step foot on it. Getting back would be the last thing on my mind.


> Why do you hate the idea of going to Mars so much?

Because there is so much to do on earth? Literally everyone is here. What other answer does anyone else need? All this Mars fandom sounds to me like clinically depressed sci-fi.


I don't mind solitude, but even then, going on an expedition together that you know you won't come back from must be an amazing bonding experience. I can't imagine a more fulfilling destiny than going to another planet.


That literally describes suicide, after 9 months of being trapped in a small spacecraft (“even if it meant I died the second I step foot on it”).


There are plenty of other places you could travel to where you get to see some amazing wonders right before your guaranteed death. Right here on Earth, too. Why not go for some of those? You could likely do it for cheaper, too.


Not OP but IMO, nothing on earth would compare to another planet.

I'm not into space exploration but sometimes I look up at the moon and it blows my mind that we actually went there. Almost feels impossible even. Mars is that x100.

I personally wouldn't want to die on Mars but I understand the appeal.


Why would I do that? It's not like I want to die, and nothing on Earth is worth dying for, but going to another planet is.


My last thoughts as I gasped for air in the gripping cold of the Martian plain were, "Wait, this is it?"


There are plenty of people who would give their lives to advance civilization for the sake of future generations. The brief pleasure of seeing some amazing wonders is, in comparison, irrelevant.

Death is guaranteed either way.


There is no "advancement" to be gained by merely sending tourists to die uselessly on a tomb world.


Colonization is not tourism.


> I would love to go there, even if it meant I died the second I step foot on it. Getting back would be the last thing on my mind

Seems like a particularly over-the-top suicide method.


Check out the URL you just pasted; it includes a hint.


No need to worry about the return trip if it's unlikely that anyone would survive the trip there, right?


”Six days after the radiation-wrecked colonists emerged from the battered Starship, the Emperor of Mars condemned them all to death by suffocation for ‘insufficiently engaging with his tweets’. Historians debate the meaning of this expression found carved in a rock outside the base.”


The settlement communication log will be just a series of "I need people to be even more hardcore" declarations coupled with lists of people thrown out of airlocks.


I've read two separate sci-fi novels recently which make some passing reference to the remains of a failed Musk settlement on Mars. Definite shades of this.


I get that gloating when your enemies fail is one of life's big pleasures, but would your opinion on the guy change if we lived in an alternate universe where Tesla had gotten FSD working?


To answer your question, no, I am utterly unenthusiastic about FSD. When I want to leave the driving to someone else I'll take the train.

I suppose I think that's better in the long run, better for society in addition to being better for the planet. I know it's not a popular take.


Okay, but idlewords' comment is one of those things that sounds like it is a clever point when it is actually not. "Ha, the big talker who breaks things is talking big and breaking things again!" All right, so if today's article had been, Tesla gets FSD working, would he instead have been like "wow, this guy ships things that work, maybe I was wrong about him and Mars?"

No, it's just gloating, right? We get it, he hates Elon, and when you hate someone you'll throw whatever dart you've got on hand at them.


I find Musk ridiculous, which is not the same as hating someone. People on this site have historically had trouble with the distinction.

To answer your question, though, my opinion on his Mars project would change completely if it turned out that cave rescue guy actually was a pedo.


I mean, not to get personal or take this on a tangent, but, I am a longtime reader of your tweets and blog, and you seem to have an actual problem of values with him. You clearly have a direction you'd like society to move in and I feel like I've seen you express dismay that this energetic, instrumentally effective guy is out there making the world into something else.

Is that not the case?


>You clearly have a direction you'd like society to move in and I feel like I've seen you express dismay that this energetic, instrumentally effective guy is out there

Hahaha talk about a loaded question. You see enthusiasm I see manic grift.


What do you think I am asking?


And what if we lived in a universe where he'd fed 5,000 people with seven loaves of bread and a few fish?


“But I did eat breakfast”


Every billionaire with a rocket has made their way to space, except for one. And that is even though SpaceX has transported astronauts a few times now. I wonder why?


I don't think the one-way thing is real. SpaceX wants to re-use their vehicles. It wouldn't make sense to leave spacecraft on Mars.


Considering the energy cost of the return trip, it makes abundant sense.


Propellant for the return trip can be made on Mars using the Sabatier process. This has been known for decades.

(This is why Starship's engines are designed to burn methane.)

https://en.m.wikipedia.org/wiki/Mars_Direct
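
For reference, the net reaction takes CO2 from the Martian atmosphere plus hydrogen (usually assumed to come from electrolysed water ice; the electrolysis step also yields the oxygen for the oxidizer side):

    CO2 + 4 H2 -> CH4 + 2 H2O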


The CONCEPT has been known for decades. Who has actually put a fuel-processing facility on Mars? I think there was some minor chemistry experiment on one of the rovers that went in that direction.

The raw energy required to make that fuel, using ANY conceivable process, is extreme, though. What energy solution has been proposed? How much solar acreage do you need (rough numbers sketched below)? How will you keep the dust off the panels? Will there be any option other than sending panels from Earth?

All the needed prerequisites come with zero practical experience. Nobody has even had the chance to work out the kinks yet. It doesn't matter how much Elon wants to do something; new shit takes a lot of time, money, and effort to shake down.
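
To put very rough numbers on the solar question (every figure below is a guess for illustration, not a sourced design value):

    # Back-of-the-envelope: solar power needed to synthesize return propellant.
    # ALL numbers are assumptions.
    ch4_needed_kg = 240_000     # methane share of a ~1,000 t methalox load
    energy_per_kg = 100e6       # J per kg CH4, incl. electrolysis and losses (guess)
    window_s = 680 * 86_400     # one ~26-month launch window, in seconds
    avg_power_w = ch4_needed_kg * energy_per_kg / window_s
    panel_avg_w_per_m2 = 25     # Mars panel output averaged over night and dust (guess)
    print(avg_power_w / 1e3)                 # ~410 kW continuous
    print(avg_power_w / panel_avg_w_per_m2)  # ~16,000 m^2 of panels

Even with generous assumptions, that's hundreds of kilowatts of continuous power and more than a hectare of panels before anyone has swept the first one.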


> Who has put a fuel processing facility on mars?

No one, because it hasn't been required or even feasible yet. We don't have the ability to send that sort of payload to Mars yet.

> How will you keep the dust off the panels?

I hear Tesla is working on a humanoid robot.


> How will you keep the dust off the panels?

Assuming there are people involved, give them a broom. ;)


Sweep if you want breathable air and time in the lead-lined room.


But it's probably cheaper to just make a new ship.

It's mostly made of iron and steel, which we have here on Earth in abundance.

One day the economics of return trips will work out. But to begin with, at least, all trips will be one-way.


Unmanned trips on Starship will definitely be one-way to begin with.

It's highly unlikely that trips with humans on board would be one-way.


My Name is Elon Musk and I Want to Help You Die in Space

https://www.mcsweeneys.net/articles/my-name-is-elon-musk-and...


Yeah, wonder how the OTA updates will work for Mars spacecraft, 'cause there's no air in between. Checkmate, Musk!


[flagged]


Remember, if you consider other human beings as simply objects for you to do what you like with, you won't feel as bad when you inevitably get them killed


Under promise, over deliver


Over promise, never deliver.


Over promise, accidentally buy a huge social media company way over market value because of a prank, never deliver


Because this is a mandatory OTA patch and does not involve stock being physically returned or taken in for repair (as "recall" typically implies), this headline is clearly very misleading.


Knowing how software engineers write code, I would never, ever trust my life to the BS that is FSD.

It's not so much a software problem as an <other people on the road> problem.


Folks with Tesla FSD, please find an empty parking lot and drive all you want. It helps Tesla's mission. Please don't test it on real roads, or at least not on busy ones.


You must be really stupid to use any kind of self-driving feature. I wish you would die in your bed before you can try it on the road.



