Hacker News

The person above you has no idea what they're talking about. There are literally hundreds of people at Tesla whose job is QA, or building tools to support QA.



And how does that change anything about my statements?

Yeah, they have QA. But for the problem they claim they're solving (robotaxis) and the speed at which they push stuff to customers (on the order of days), it's vastly, vastly insufficient. And it lacks any safety lifecycle process rigor. Again, just look at the timelines: even if you're super efficient, you cannot possibly claim you can do even such basic things as proper change management (no, a commit message isn't that) or validation.


> it lacks any safety lifecycle process

completely demonstrably false

> speed of pushing stuff to customers (on the order of days)

this is also false and doesn't happen

> you cannot possibly claim you can do even such basic things as proper change management (no, a commit message isn't that) or validation.

you know absolutely nothing about the internal timelines of developments and deployments at tesla and to suggest it's impossible without that knowledge is just dishonest


> > it lacks any safety lifecycle process

> completely demonstrably false

The head of AP testified under oath that he didn't know what an Operational Design Domain is. I'll just leave it at that.

> > speed of pushing stuff to customers (on the order of days)

> this is also false and doesn't happen

Has Musk never tweeted about a .1 release fixing some critical issues, coming in the next few days? I must live in a different timeline.

> > you cannot possibly claim you can do even such basic things as proper change management (no, a commit message isn't that) or validation.

> you know absolutely nothing about the internal timelines of developments and deployments at tesla and to suggest it's impossible without that knowledge is just dishonest

Let's assume I have no internal information. If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.


> Has Musk never tweeted about a .1 release fixing some critical issues, coming in the next few days? I must live in a different timeline.

Let's say you have a baby about to be born. You tweet, "birth in ten days!" You can't then say, "look, here is a tweet; it proves that babies actually develop in a matter of days, and moreover it proves that expectant mothers don't follow proper pre-birth routines," because the tweet isn't the process that created the baby.

It is separate.

> If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.

Right. So video evidence of internal processes, including QA processes, is what actually looks, swims, and quacks like a duck here, just as video evidence of a mother with a round belly for months before the tweet would refute the claim that babies develop in days.

So when Elon has also tweeted that a launch was delayed because of issues that were discovered (which does happen, as I'm sure you know if you follow his tweets as you imply), then that is evidence congruent with the video evidence we have of QA processes existing within the company.


> and speed of pushing stuff to customers (on the order of days)

well, if you don't get the software pushed to the QA team (the customers), how else are they going to get it tested?


can we please stop with this disinformation? the customers are not the QA team.


Even AP is "autosteer (beta)", so I sure do feel like part of their QA team. And it drives like a slightly inebriated human.

I do have high hopes that the work they've done on the FSD stack will make for a significant improvement to basic AP whenever they get merged (assuming it ever happens; it has been talked about for years). That'd be nice.


what do you call them? there's no possible way they can make changes to the software and have them thoroughly vetted before the OTA push. Tesla does not have enough company-owned cars driving on public roads to vet these changes. The QA team, at best, can analyze the data received from customers. That makes the customers the testers in my book.


the fact that you don't know how tesla vets these changes, very extensively, prior to any physical car receiving the update reinforces that you have no idea what you're talking about

tesla does extensive, meaningful vetting of these updates. i'll let you do the research yourself so that maybe you can quit spreading misinformation


so well vetted that issues keep happening, so well vetted that an OTA "software patch" is raised to the level of an automotive recall. if you call that misinformation, then, "boy, i don't know".


I think you are simply correct that customers are acting as testers. But I don't think that point, where you are right, is where his actual objection lies.

It is well known that issues happen despite extensive vetting. The presence of vetting does not guarantee the absence of issues. For example, let's say there are 1000 defects in existence and your diagnostic finds 99% of them, so it finds 990 defects and 10 remain undetected. Now suppose you have a second detector that finds all defects. How many defects does it see? 10. Your expectation, given that vetting is taking place, should be that you will tend to observe about ten defects in this situation.
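The arithmetic above can be sketched as a quick simulation. The numbers (1000 defects, a 99% catch rate) are the commenter's illustrative assumptions, not real Tesla data:

```python
import random

random.seed(0)

TOTAL_DEFECTS = 1000       # hypothetical defects in existence
VETTING_CATCH_RATE = 0.99  # internal vetting finds 99% of them

def defects_reaching_field(total, catch_rate):
    """Each defect is independently caught pre-release with probability
    catch_rate; whatever slips through is seen by the perfect second
    detector (the field)."""
    return sum(1 for _ in range(total) if random.random() >= catch_rate)

# Average the field-visible defect count over many simulated releases.
trials = [defects_reaching_field(TOTAL_DEFECTS, VETTING_CATCH_RATE)
          for _ in range(5000)]
avg = sum(trials) / len(trials)
print(avg)  # close to 1000 * (1 - 0.99) = 10
```

The field observer sees roughly ten defects per release either way; the 990 that were caught internally are invisible to them, which is the whole point of the argument.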

So let's say you are someone watching that second detector. You observe that you keep seeing defects. Probably, because of selection bias in what gets reported, you observe an alarming 100% rate of finding defects: the few times no defects happen, it doesn't get shared, for much the same reason we don't watch paint dry. Can you therefore claim there is no diagnostic detecting and removing defects?

Well, not really, because you aren't observing the defects independently of the process that removed the rest. When you observe 10 defects, that doesn't mean there weren't 990 others that were caught and removed. It just means you observed 10 defects.

So the actual evidence you need to use to decide whether there is a diagnostic taking place is evidence that is more strongly conditional on whether it is taking place. You need something that can observe whether the 990 are being caught.

In this case, we have video evidence of extensive QA processes. That is much stronger than the evidence that defects show up, because defects show up whether or not vetting is happening, while each reasonable test case ought to decrease the chance of shipping the particular defect it tests for.

For a much more thorough treatment on this subject check out:

https://www.lesswrong.com/tag/bayesian-probability

So that is basically why he disagrees with you.

Tesla definitely needs to root-cause the defects that were found and improve the existing vetting process, but it is very obvious they do have these processes.




