Hacker News

Another product-in-search-of-a-problem from Lytro. It's entirely possible that I'm wrong, but I assume there are legions of problems in the machine-vision world that Lytro could help solve. For example, could Lytro: help cars see other cars better; improve surveillance capabilities (yes, yes, surveillance is bad); improve robotics; be useful in medical diagnosis (Lytro for x-rays?). I have a friend at Lytro and asked him these questions, and his response was basically "no, no, consumer photography is where it's at!"



Seems unnecessarily harsh.

I do think this should help with some things, certainly. It seems this could determine the distance between objects much more quickly than building a full 3D map of them would.

I would be curious whether there are numbers in this regard. But I don't know any reason offhand to dismiss it. It is not that uncommon for photographs to focus on the wrong item, and this basically solves that problem, right?


Harsh: agreed; the way I worded that reflected my frustration and was unnecessarily harsh. That said, I am frustrated that what-seems-like-a-great-technology is being used so ineffectively. Lytro was chatting with a car company about using the sensors on their cars and, long time constants aside, that appears to have been for naught.

Photography: the camera on my phone has been good enough for me for years, so I'm a poor judge of camera needs. But I just haven't heard any photo nerd complain about focus problems on a camera, so I am not sure there is an actual problem here that Lytro is solving.


How do you envision this being better than other sensor technology? It seems car makers would be better off just adding more sensors instead of this. No need for image processing to determine distance when you can use a distance sensor directly, right?


Raytrix has already been doing this: http://raytrix.de. However, stereo cameras, time-of-flight, and structured-light approaches are usually more effective. You can do everything a light-field array camera does with 3+ lenses and some clever algorithms.
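To make the stereo alternative concrete: the core of depth-from-stereo is simple triangulation, depth = focal_length × baseline / disparity. A minimal sketch (the numbers below are illustrative, not from any real rig):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo triangulation: depth = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras, in meters
    disparity_px -- horizontal pixel offset of the same point in the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px


# Example: 700 px focal length, 12 cm baseline, 14 px disparity -> 6.0 m
print(depth_from_disparity(700.0, 0.12, 14.0))
```

The "clever algorithms" part is mostly in finding the disparity (matching pixels across views); once you have it, the geometry above is all you need.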


Correction: you can do everything Lytro is doing with light fields using 3+ lenses and some clever algorithms. Light fields are incredibly rich 4D datasets with massive potential (would you like a side of automatic shader generation with your 3D scan?), but Lytro is totally blowing it by focusing (no pun intended) on this one trivial aspect.
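For the curious, the refocusing trick Lytro demos is essentially shift-and-sum over that 4D dataset: each sub-aperture view is shifted in proportion to its offset from the center aperture, then all views are averaged. A minimal NumPy sketch, with array layout and parameter names being my own assumptions rather than anything from Lytro's pipeline:

```python
import numpy as np

def refocus(lightfield: np.ndarray, alpha: float) -> np.ndarray:
    """Shift-and-sum synthetic refocusing of a 4D light field.

    lightfield -- shape (U, V, H, W): a U x V grid of grayscale sub-aperture views
    alpha      -- refocus parameter; 0 keeps the captured focal plane,
                  other values move it forward or backward
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2  # center of the aperture grid
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its distance from the center aperture
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Objects whose disparity matches alpha line up across views and come out sharp; everything else is averaged into blur. The same loop over views is why a small camera array plus software can stand in for a light-field sensor.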


I have a friend who is exploring using light-field cameras for photography in bubble chambers: https://en.wikipedia.org/wiki/Bubble_chamber


I like that they can use software in the camera to cut glass out of the lens. I have some pretty hefty glass, and if this is executed right (quality wise) it could be a big win -- even if they fix the focus like a traditional camera.


From one of their videos, they emphasize that the point is not necessarily refocusing still photos, but producing interactive still photo end-user experiences. That's a very intriguing idea to me.


Making tools that were once expensive or difficult to use inexpensive and easy to use means that scientists and students can use them to discover and invent the things you suggest.



