Don't LIDAR/radar sensors (I'm not familiar with exactly what the options are) have benefits that vision doesn't have, like working in poor lighting/visibility? Why would Tesla move away from these sensors?


Lidar has advantages over cameras, but it also has some downsides. Sunlight, rain, and snow can interfere with the sensors, as can other nearby lidar devices (though de-noising algorithms keep improving). There are also issues with object detection: lidar gives you a point cloud, and you need software that can detect and categorize things from that point cloud. Because automotive lidar is relatively new, that problem hasn't had nearly as much R&D poured into it as the equivalent problems in computer vision.
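
To make the point-cloud problem concrete, here's a toy sketch in Python using synthetic points and an off-the-shelf clustering algorithm (DBSCAN from scikit-learn). The scene and the eps/min_samples values are made up for illustration, and treating clustering as "detection" is a big simplification; real perception stacks use learned models:

    # Toy lidar "frame": flat ground noise plus one dense blob (a
    # car-ish object). All numbers here are invented for illustration.
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)
    ground = rng.uniform([-20, -5, 0.0], [20, 5, 0.2], size=(500, 3))
    car = rng.normal([10.0, 2.0, 0.8], 0.3, size=(200, 3))
    points = np.vstack([ground, car])

    # Cluster nearby points; label -1 means "unassigned noise".
    labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)
    for label in sorted(set(labels) - {-1}):
        cluster = points[labels == label]
        print(f"object {label}: {len(cluster)} points, "
              f"centroid {cluster.mean(axis=0).round(2)}")

Clustering finds *something* near x=10, y=2, but deciding whether that blob is a car, a dumpster, or a pedestrian is a separate and much harder problem, which is exactly where the missing R&D shows up.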

Then there's the issue of sensor fusion. Lidar requires sensor fusion because it can't read lane lines, traffic lights, or signs, and it can't see the lights on vehicles or pedestrians. So you still have to solve most of the computer vision problems, and on top of that you have to build software that can reliably merge the data from both sensors. What if the sensors disagree? If you err on the side of caution and brake whenever either sensor detects an obstacle, you get lots of false positives and phantom braking (which increases the chance of rear-end collisions). If you YOLO it, you might hit something. And if you improve the software to the point that lidar and cameras never disagree, well, then what do you need lidar for?
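
The "what if the sensors disagree" dilemma is easy to state in code. Here's a deliberately simplified sketch; the two policies and the scenario are invented for illustration, and a real stack would fuse probabilistic estimates rather than single booleans:

    # Two fusion policies for a single "obstacle ahead?" bit per sensor.
    def should_brake(camera_detects: bool, lidar_detects: bool,
                     policy: str = "either") -> bool:
        if policy == "either":
            # Cautious: any detection triggers braking. Every lidar
            # false positive becomes a phantom braking event.
            return camera_detects or lidar_detects
        if policy == "both":
            # Require agreement: fewer phantom brakes, but a real
            # obstacle seen by only one sensor is ignored.
            return camera_detects and lidar_detects
        raise ValueError(f"unknown policy: {policy!r}")

    # A lidar false positive (e.g. spray kicked up from a wet road):
    print(should_brake(camera_detects=False, lidar_detects=True,
                       policy="either"))  # True  -> phantom brake
    print(should_brake(camera_detects=False, lidar_detects=True,
                       policy="both"))    # False -> ignored

Neither policy is safe on its own, which is why the hard work lands in making the two sensors agree in the first place.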

I think lidar will become more prevalent, and I wouldn't be surprised if Tesla added it to their vehicles in the future. But the primary sensor will always be cameras.


Because LIDAR is expensive and available only in limited quantities. There's no way they could sell the number of cars they're selling right now if each one came with a LIDAR unit.

Camera modules are cheap and available in huge quantities.



