> it does raise serious questions as to whether the models used for autonomous cars could also "hallucinate" and do something stupid "on purpose"...

It doesn't, because Tesla's FSD model is just a rules engine with an RGB camera. There's no "purpose" to any hallucination; it would just be a misread of sensors and input.

Tesla's FSD just doesn't work. The model is not sentient. It's not even a Transformer (in both the machine learning and Hasbro sense).




> rules engine with an RGB camera

I don't think that's true? They use convolutional networks for image recognition, and those things can certainly hallucinate, e.g. detecting things that are not there.
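
Rough illustration of that failure mode (PyTorch/torchvision assumed, a stock pretrained ResNet, obviously not Tesla's actual stack): feed a classifier pure noise and it still hands back a label with a nonzero confidence, because a softmax over classes has no way to say "nothing is there".

    # Sketch only: a pretrained classifier happily labels random noise.
    import torch
    from torchvision.models import resnet18, ResNet18_Weights

    weights = ResNet18_Weights.DEFAULT
    model = resnet18(weights=weights).eval()

    noise = torch.rand(1, 3, 224, 224)   # not an image of anything
    with torch.no_grad():
        probs = model(noise).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    # Prints some ImageNet class and a nonzero confidence for pure noise.
    print(weights.meta["categories"][idx.item()], round(conf.item(), 3))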


I guess what the grandparent means is that there is some good old "discrete logic" on top of the various sensor inputs that ultimately turns things like a detected red light into the car stopping.

But of course, as you say, that system does not consume actual raw (camera) sensor data; instead there are lots of intermediate networks that turn the camera images (and other sensors) into red lights, lane curvatures, objects, ... and those are all very vulnerable to making up things that aren't there, or not seeing what is plain to see, with no one quite able to explain why.
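
Very schematically, something like this (hypothetical names, not Tesla's code): the perception side is learned and fallible, the planning side is plain conditionals, so a phantom red light upstream becomes a very real brake downstream.

    from dataclasses import dataclass

    @dataclass
    class Perception:
        red_light: bool        # from a traffic-light network
        obstacle_ahead: bool   # from an object-detection network
        lane_curvature: float  # from a lane network

    def plan(p: Perception) -> str:
        # The "good old discrete logic": deterministic rules over
        # whatever the networks claim to have seen.
        if p.red_light or p.obstacle_ahead:
            return "brake"
        return "keep_driving"

    # A hallucinated red light propagates straight into behaviour:
    print(plan(Perception(red_light=True, obstacle_ahead=False, lane_curvature=0.0)))  # brake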


You were correct in the first half. My ultimate point was that hallucinations in this sense are just computational anomalies. There is no human "purpose" to them, as the post I was responding to was trying to imply.



We need to stop anthropomorphising machines. Sensor errors and bugs aren’t chemicals messing with brain chemistry, even if it may seem analogous.

Or maybe when I get a bug report today I’m going to tell them the software is just hallucinating.



