
My thought: LIDARs and other kinds of sensors are much better than human eyes. I'd expect current self-driving cars to already have better sensory resolution than human drivers.



LIDARs are not better sensors than our eyes. They are different, but not better.

The human eye is an amazing instrument. Contrast range of 24 stops vs. 16 in a top-of-the-line camera. (Think a 24-bit ADC vs. a 16-bit one.) Follow that up with 24-bit color depth and the ability of our eyes to automatically adjust what portion of the spectrum we're focusing on, and you've got a spectacular sensor.
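To put rough numbers on those stops: each stop is a factor of two in luminance, so an 8-stop gap is about a 256x difference in contrast ratio. A quick back-of-the-envelope sketch (the 24/16 stop figures are just the claims above, not measurements):

    import math

    def stops_to_ratio(stops):
        # Each photographic stop doubles the luminance range.
        return 2 ** stops

    def ratio_to_db(ratio):
        # Contrast ratio in dB, the way image-sensor specs usually quote it.
        return 20 * math.log10(ratio)

    for stops in (24, 16):  # claimed eye vs. high-end camera sensor
        ratio = stops_to_ratio(stops)
        print(f"{stops} stops -> {ratio:,.0f}:1 (~{ratio_to_db(ratio):.0f} dB)")

That works out to roughly 16.8 million:1 (~144 dB) for 24 stops and 65,536:1 (~96 dB) for 16.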


Agreed about the eye. Eyes are great for looking at whatever you point them at, which for drivers in my area is usually their iPhone. I'd be surprised if humans were terribly good at perceiving a meter-high object that appeared in their lane at a distance of 50 m, because their oculi are occupied perceiving other things. I'm sure a self-driving car that had an unblinking human eye would be a fantastic instrument.

Anyway, LIDAR is not the only instrument you can put in an autonomous vehicle, right? You can have optical cameras, stereo cameras, infrared, whatever.


Are they? Visual acuity at the fovea is around 1 arc minute. Translating that to a 150 by 100 degree visual field (the actual human visual field is even larger) gives a resolution of 9000 by 6000 pixels.

Of course, the human eye doesn't work that way: resolution drops off rapidly away from the fovea. Many electronic sensors do work that way, though. So, individual sensors may, for some tasks, be inferior to human 20/20 vision.
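For anyone who wants to check the arithmetic (the 1 arc-minute acuity and the 150 by 100 degree field are the assumptions above, and this ignores the foveal falloff just mentioned):

    # Equivalent pixel count for 1 arc-minute acuity over an assumed
    # 150 x 100 degree field of view (60 arc-minutes per degree).
    ACUITY_ARCMIN = 1.0
    FIELD_DEG = (150, 100)

    pixels = tuple(int(deg * 60 / ACUITY_ARCMIN) for deg in FIELD_DEG)
    print(pixels)  # (9000, 6000), i.e. roughly 54 megapixels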

I googled to find out whether Google's driverless cars use anisotropic sensors (a simple solution would be a wide-field camera plus a telephoto camera), but couldn't find good info. Anybody know more?


I'm skeptical that LIDAR is accurate at long enough distances, due to noise and the falloff of the light's subtended solid angle with distance. I am not a sensor expert, but if there is a mattress on the highway far ahead (think 0.1-0.2 mi or so), I would not be surprised if I see it, thanks to the color contrast and my brain's pattern recognition, well before LIDAR detects it by bouncing light off of it.


At highway speeds of 70 miles per hour, a car covers 0.1 miles in 5.14 seconds. So, with your range of 0.1-0.2 miles, a human would have 5-10 seconds to perceive the problem and stop.

Google's LIDAR can see approximately 60 meters out. At 70mph, this gives the autonomous car under 2 seconds to perceive the problem and stop.

Most humans have a perception-reaction time of approximately 2.5-3.0 seconds on a highway, so the human range drops to 2-7 seconds. Factor in the time it takes to execute a maneuver (move foot to brake pedal, apply the brake) and it drops even further -- I'm not certain by how much -- likely another 1-2 seconds. At your lower bound of 0.1 miles, I think the autonomous car already has the advantage.
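Rough arithmetic with the figures in this thread (70 mph, ~60 m LIDAR range, ~2.5 s perception-reaction time; all assumptions, not measurements):

    # Time left to act on an obstacle first noticed at a given distance.
    MPH_TO_MPS = 0.44704
    speed = 70 * MPH_TO_MPS            # ~31.3 m/s

    human_sight = 0.1 * 1609.34        # human spots the obstacle at ~161 m
    lidar_range = 60.0                 # assumed LIDAR detection range, in m

    human_margin = human_sight / speed - 2.5   # minus perception-reaction time
    lidar_margin = lidar_range / speed         # computer decision time ~negligible

    print(f"human margin: {human_margin:.1f} s")   # ~2.6 s
    print(f"lidar margin: {lidar_margin:.1f} s")   # ~1.9 s

Before actuation time, the human still has ~2.6 s vs. ~1.9 s for the car at the 0.1-mile bound; subtract the extra second or two it takes the human to physically brake and the car comes out ahead, per the argument above.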

I think it's clear that a computer would react and apply a maneuver tremendously faster than a human. I also think it's obvious that to have a clear advantage over a human on a highway, these sensors need additional range. From what I understand, Google is working on their own LIDAR sensor (an upgrade from the Velodyne they use now), so perhaps they also have this in mind.

I'd love to see more data on this stuff.


You're discounting the actuation time for the machine, as well as latency in the system. Nothing is instantaneous. I've seen driving systems with 1-2 seconds of latency from sensor data coming in to actuating the brakes.
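To make that concrete, folding an end-to-end pipeline latency into the numbers used above (same assumptions: 70 mph, 60 m detection range; the latency values here are hypothetical):

    # Margin left to brake once sensor-to-actuator latency is subtracted.
    speed = 70 * 0.44704               # m/s
    lidar_range = 60.0                 # m, assumed detection range

    for latency in (0.5, 1.0, 2.0):    # hypothetical end-to-end latencies, s
        margin = lidar_range / speed - latency
        print(f"latency {latency:.1f} s -> {margin:.2f} s left to brake")

At a 2-second latency the margin is essentially gone.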


Yeah, and I think a larger problem is deer or other animals running into the road, especially at night. I don't know if the Google car can handle that situation yet, but the sensors should be able to see animals way before humans can.


Many (most?) car/deer/moose collisions are unavoidable even for a human driver. In particular, because a human driver is focused on monitoring the road ahead, they cannot determine instantaneously whether it is better to continue straight and hit the animal or to swerve into oncoming traffic or the adjacent lane. The reflex reaction is to swerve, which can often cause an even worse accident.

Humans also can't process the trade-off of hitting the animal vs. attempting to avoid it (based on its size) fast enough, whereas a self-driving car conceivably could.


I would guess, too, that there are mitigation actions that can be taken by computers that humans don't have the reflexes to handle.

For example, I was always told in driving school as a kid that if you're about to hit a deer, you should let up on your brakes right before you hit it. That way the bumper of the car comes back up and hits the deer square on, as opposed to hitting the deer's bottom half and possibly bringing it up onto the hood/windshield, where it could injure passengers.

I have no idea whether that's actually true, but a self-driving car could easily handle something like that, or automatically turn the car to take the impact on the side without passengers, whatever the best course of action is.


Plus, what can your car do to save you, given the context of the accident it's about to have? That's a very cool idea.



