But you are filtering the LIDAR data (source: I have a Velodyne LIDAR in my office). Yes, one stray hit will be ignored. But you aren't getting one stray hit: you get 20 hits on this revolution, 6 on the next, 21 on the one after that, and so on, and then use various algorithms to decide whether you are seeing objects or noise.

Recall that besides the inaccuracy of the spinning LIDAR (it does not hit exactly the same spot even when sitting still and pointed at an immobile object), the car is moving. Any decent Bayesian filter derives a heck of a lot of information from these changes. I'm not just talking about the physical change, which is also important (change of position = change in angle = different reflection point). I'm talking about how your process model generates information via hidden variables: from position over time we derive velocity, and the correlation between velocity and position increases our accuracy.

In the context of the LIDAR, the filter can detect that there is a small clump of points moving towards the car at 70 mph (from its point of view; from our POV it is the car approaching the debris at 70 mph). Reflections that are true noise will not be clumped, and will not consistently have a velocity that matches the car's.
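The hidden-variable point can be sketched concretely. Below is a minimal 1D constant-velocity Kalman filter: it only ever measures position (range to a return), yet the process model lets it infer velocity, which is what makes the "clump moving at 70 mph" distinction possible. All the noise values and the 10 Hz rate here are illustrative assumptions, not real Velodyne parameters.

```python
import numpy as np

# Toy 1D Kalman filter with state [position, velocity].
# We measure position only; velocity is a hidden variable recovered
# through the correlation the process model builds between the two.
dt = 0.1                        # assumed 10 Hz revolutions
F = np.array([[1.0, dt],        # state transition: pos += vel * dt
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])      # we observe position only
Q = np.diag([0.01, 0.1])        # process noise (illustrative)
R = np.array([[0.25]])          # measurement noise, ~0.5 m std (illustrative)

x = np.array([[100.0], [0.0]])  # initial guess: 100 m away, unknown velocity
P = np.diag([1.0, 100.0])       # start very uncertain about velocity

true_vel = -31.3                # ~70 mph toward the sensor, in m/s
rng = np.random.default_rng(0)
for k in range(50):
    # noisy range measurement of the approaching "debris"
    z = 100.0 + true_vel * dt * (k + 1) + rng.normal(0.0, 0.5)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated velocity: {x[1, 0]:.1f} m/s (true: {true_vel})")
```

After a few dozen revolutions the velocity estimate settles near the true value even though velocity was never measured directly; uncorrelated noise returns never produce a consistent velocity estimate like this.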
With all that said, I've only played with the LIDAR, so I can't give you lower detection limits. But with proper algorithms it does better than the static computation suggests.
Of course. Filters do wonders - but they're still not a panacea.
You also get the covariance across that filter, which means you still need to decide how much to cut: whether or not to reject a return as rain, dust, or a reflection, and whether to trust reflections below a certain reflectivity value. Data helps with this, but, as always, there's a ton of corner cases.
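That "how much to cut" decision is often done by gating on the filter's innovation covariance. Here's a hedged sketch using a squared Mahalanobis distance against a chi-squared threshold; the covariance values, gate, and measurement layout are all made up for illustration.

```python
import numpy as np

def mahalanobis_gate(z, z_pred, S, gate=9.21):
    """Accept measurement z if its squared Mahalanobis distance from the
    predicted measurement z_pred is inside the chi-squared gate
    (9.21 is roughly the 99% point for 2 degrees of freedom)."""
    innov = z - z_pred
    d2 = float(innov.T @ np.linalg.inv(S) @ innov)
    return d2 <= gate

# Illustrative innovation covariance and predicted (range, bearing):
S = np.array([[0.5, 0.0],
              [0.0, 0.5]])
z_pred = np.array([[100.0], [0.0]])

good = np.array([[100.8], [0.3]])      # consistent with the track
spurious = np.array([[112.0], [4.0]])  # a stray rain/dust return

print(mahalanobis_gate(good, z_pred, S))      # keep it
print(mahalanobis_gate(spurious, z_pred, S))  # reject as noise
```

The corner cases live in exactly this threshold: too tight and you drop real debris in rain, too loose and you brake for dust.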