
I agree with you. I would really like to see datasets that reflect how things actually are, and I think it would be dangerous to conclude that FSD is safe on the basis of the data I shared. However, I would hope that whatever opinions people share are congruent with the observed data. The claim that Elon Musk and Tesla don't care about safety doesn't seem congruent with that data, which shows Autopilot has improved safety; it isn't the explanation that best fits the observations.

Just for context - I've been in a self-driving vehicle. Anecdotally, someone ahead of me slammed on the brakes. The car stopped for me, but I was shocked: the traffic hadn't changed for hours before that (it was a cross-country trip), and I think I would probably have gotten into an accident there myself. Also anecdotally, there were times when I felt the car was not driving properly, so I took over; I think it could have gotten into an accident. Basically, the best explanation I have for the data I've seen right now is that human + self-driving is currently better than human alone and better than self-driving alone. The interesting thing about this explanation is how well it tracks with other times we've seen technology like this before. In chess, for example, there was a period before complete AI supremacy (which is what we have now) when human + AI was better than AI alone.

I like the idea of being safe, so if the evidence ends up favoring only humans or only AI doing the driving, I want to follow it. Right now I think it shows the mixed strategy is best, which is kind of nice, because it implies that the policy that best collects data to reduce future accidents through learning happens to be the policy currently in use. I like that.



