
I. You shift the goalposts a lot, e.g. in the same sentence you mix statements about the future and the present: "we will" vs. "today's technology".

> I haven't heard one yet that we won't be able to overcome with today's technology.

BTW, LIDAR doesn't work in snow and rain. Today. It's not clear that it works for multiple independent vehicles, either. We may only have one self-driving car, singular.

II. Also, in your scenario, even if you do send that car over to your wife to use once you get to work, the same car will get more wear and tear than two individual cars, because it's doing the same work plus the extra travel between the two of you.

III. You also mix up the moment when self-driving cars become as safe as humans with the moment they become safER than humans.

IV. You also overly dichotomize safer vs. not safer. What if they kill fewer auto drivers, but more pedestrians? What if they kill fewer adults and more children? It's not that simple: our values are reflected in our choices to deploy technology.




> It's not clear that it works for multiple independent vehicles, either. We may only have one self-driving car, singular.

I'm no LIDAR expert, but I do design and build synchronous detectors (think: boxcar/lock-in amplifier/gated counter). Maybe this has been tried and there's some reason it wouldn't work, but I don't see why you couldn't apply a pseudo-random modulation to the LIDAR emitter and then run the received signal through a correlator. Besides bringing some conversion gain and allowing for more accurate distance determination (compared to a simple pulsed-ToF), PRN modulation allows you to share the channel with other transmitters (CDMA, if you will).
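To make that concrete, here's a toy numpy sketch of the idea (chip rate, code length, and amplitudes are all made up, and a real system would do this in hardware): each transmitter spreads with its own ±1 code, the receiver correlates against its own code, and another transmitter using a different code just shows up as a slightly raised noise floor rather than a false peak.

    # Toy PRN-ranging sketch; all parameters are illustrative, not from a real LIDAR.
    import numpy as np

    rng = np.random.default_rng(0)
    n_chips = 1023                 # code length; processing gain ~10*log10(1023) ~ 30 dB
    chip_rate = 1e9                # 1 Gchip/s -> ~0.15 m per chip of round-trip delay
    c = 3e8

    code_a = rng.choice([-1.0, 1.0], n_chips)     # our transmitter's code
    code_b = rng.choice([-1.0, 1.0], n_chips)     # some other vehicle's code

    true_delay = 267                              # simulated round-trip delay, in chips
    rx = 0.2 * np.roll(code_a, true_delay)        # weak echo of our own signal
    rx += 0.5 * np.roll(code_b, 555)              # stronger uncorrelated interferer
    rx += 0.3 * rng.standard_normal(n_chips)      # receiver noise

    # Circular cross-correlation against our own code; the peak sits at the delay.
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code_a))).real
    delay = int(np.argmax(corr))
    print(delay, delay / chip_rate * c / 2)       # -> 267 chips, ~40 m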

> BTW, LIDAR doesn't work in snow and rain. Today.

Isn't this just a matter of the wavelength used by today's LIDARs? There's a huge hole in the liquid and solid water absorption spectrum in the visible range.


I don't know the system, but I'm sure it's already synchronous in that there are numerous receiver channels and their acquisition values are correlated with the illumination angle.

If the system has to give up time slots for other vehicles to use, it loses bandwidth in proportion to the number of vehicles in the vicinity.
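Back-of-the-envelope (with a made-up million-point-per-second sensor), a pure time-division split divides each vehicle's acquisition rate by the number of vehicles in range:

    # Toy TDMA budget: per-vehicle point rate if time slots are shared.
    raw_points_per_sec = 1_000_000          # illustrative single-sensor rate
    for vehicles in (1, 4, 16, 64):
        print(vehicles, raw_points_per_sec // vehicles)
    # 1 -> 1000000, 4 -> 250000, 16 -> 62500, 64 -> 15625 points/sec each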

Sharing the data between vehicles is possible; however, it raises geographical and networking problems, such as how to partition the acquisition, how to synchronize it, and how to share it, including the necessary bandwidth, which I'm sure is quite high.

As soon as multiple vehicles sharing data enters the picture, security must also get factored in. Besides the link control concerns, the system will be sensitive to jamming with that much multiplexed sampling taking place.

PP is correct in hinting that the leap from one self-driving vehicle to multiple self-driving vehicles will be large indeed.


> I don't know the system, but I'm sure it's already synchronous in that there are numerous receiver channels and their acquisition values are correlated with the illumination angle.

This is not "synchronous detection" in the way I meant it. I don't mean this as a put-down, but it may be instructive to Google "lock-in amplifier," a term I mentioned in my previous post.

All of your other concerns are addressed by appropriate choice of PRN code(s). Additional vehicles, which operate without the need for coordination, merely raise the noise floor. They do not "jam" each other. It should be obvious that a minimum S/N ratio is required for the LIDAR system to work, and therefore that an arbitrarily high number of LIDAR transmitters cannot coexist. However, it is far from obvious (to me, at least without learning more about LIDAR and plugging some numbers into a model) that a congested freeway of driverless cars would have "too many" LIDAR transmitters.
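For a rough sense of the numbers, here's an idealized CDMA model where equal-power interferers look like white noise after despreading; the code length and detection threshold are invented:

    # Back-of-envelope: coexisting PRN LIDARs under an idealized CDMA model.
    import math

    n_chips = 1023                          # spreading code length (illustrative)
    gain_db = 10 * math.log10(n_chips)      # ~30 dB processing gain
    required_snr_db = 12                    # made-up detection threshold

    # Post-correlation S/I with K equal-power interferers: gain - 10*log10(K)
    for k in (1, 10, 50, 100):
        si_db = gain_db - 10 * math.log10(k)
        print(k, round(si_db, 1), si_db >= required_snr_db)
    # In this toy model the margin runs out somewhere between 50 and 100 interferers.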


No, snow scatters the emitted light and scarcely any is returned at all. A wall covered in ice looks like an air gap.

(I know this because my very expensive 800kg autonomous vehicle almost drove into a lake under my software control due to this effect...)


Aaaah, a ___domain-knowledge expert, thaaaank god!

Actual information on this topic is sorely lacking....

....While we have you on the line, would you please contribute some more information on LIDAR w/r/t autonomous vehicles?

For example, how well does LIDAR work with rain? fog? How does LIDAR interact with other nearby LIDARs? How much power does a typical system emit? How sensitive is it to jamming? How many frames per second can be acquired? How much computing power is used to re-assemble a scene?


Generally LIDAR "works" reasonably in light rain because the raindrops scatter most of the emitted beams away, and you get no returns from them. Occasionally you'll hit a raindrop straight on and get a reflection back to the receiver, but your algorithms should probably filter this out.
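One common way to do that filtering is a radius-outlier filter: isolated raindrop hits have almost no neighbors, while real surfaces are dense. A minimal numpy sketch (the radius and neighbor threshold here are invented; real systems tune these per sensor):

    # Drop returns with too few neighbors; kills isolated raindrop hits,
    # keeps dense surfaces. O(N^2) for clarity; use a KD-tree for real clouds.
    import numpy as np

    def filter_sparse_returns(points, radius=0.3, min_neighbors=3):
        # points: (N, 3) array of x, y, z returns in meters
        d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
        neighbor_counts = np.sum(d2 < radius ** 2, axis=1) - 1  # exclude self
        return points[neighbor_counts >= min_neighbors]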

Fog is so much denser that you get heaps of reflected returns, and naive algorithms would treat this as there being lots of solid stuff in the environment. Your best bet is to combine sensors that don't share the same EM bands, e.g. lidar plus camera, radar, sonar, etc. It's not just redundancy, but the ability to perceive across different frequencies, so that things that scatter or absorb in one band don't do so in another, allowing you to distinguish them.

I don't think one LIDAR would interact with another to any great extent. Even if off-the-shelf models did, there'd ultimately be some way to uniquely identify or polarise the beams such that this wasn't a problem. I suspect it's reasonably easy to engineer a solution around this.

Power emission I'm not entirely sure about, but almost all are at least class 2 laser devices. You shouldn't point an SLR camera at Google's vehicles, for example, as you can destroy the CCD. The Velodyne they use draws 4-6 amps at 12 V, but a lot of that power goes to heating the emitter, motors, etc., and isn't all emitted by any means.

Frames-per-second isn't really the right measure for Velodynes, but their max rotation is 3-4 revs per second IIRC. It's about a million points per second. (For something like the Kinect For Windows V2, which is a flash lidar, it should run at 30fps, but with lower depth resolution.)

I can't think how you could 'jam' a lidar, but you can certainly confuse the crap out of it easily enough. Scatter some corner reflectors on the roads for example, dust grenades, fog cannons, etc.

Computing power to reconstruct is significant (many DARPA Grand Challenge teams had problems containing their power budget for CPUs/GPUs) but manageable. It depends on the algorithms used and, in many cases, the amount of "history" you infer over. Google's approach is to log everything, post-process it into static world maps, then upload those maps back to the vehicles. When they're actually driving for real, those maps effectively let them take the delta between what they currently see and what the static map says should be there, and they only really have to handle the differences (i.e. people, cars, bikes, etc.). This is still hard, but it's much easier than e.g. the Mars Rover problem (more my area of experience).
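To illustrate that map-delta idea, here's a toy occupancy-grid version (the grids and threshold are invented, and the real pipeline works on registered 3D data, but the principle is the same):

    # Cells occupied in the live scan but free in the prior static map are
    # candidate dynamic objects (people, cars, bikes).
    import numpy as np

    def dynamic_cells(static_map, live_scan, occupied=0.65):
        # Both inputs: aligned 2D occupancy grids with values in [0, 1].
        return (live_scan > occupied) & (static_map < occupied)

    static_map = np.zeros((4, 4)); static_map[0, :] = 1.0  # a known wall
    live_scan = static_map.copy(); live_scan[2, 2] = 1.0   # something new appeared
    print(np.argwhere(dynamic_cells(static_map, live_scan)))  # -> [[2 2]]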



