Light field technology has a lot of interesting applications outside of photography as such. I see a lot of sceptical comments here about the potential of this tech for creative or artistic purposes, and I wanted to point out that it has been the subject of a lot of research in computer vision.
Basically, the light field lets you reconstruct a series of images of your subject as seen from every patch of your lens. Of course, for subjects not at infinity, these amount to multi-view images. This enables:
(a) 3D reconstruction, by multi-view matching with precisely known geometry.
(b) Segmentation, using the 3D reconstruction mentioned above.
(c) Super-resolution, again by multi-view matching, which can recover some of the resolution you lose by capturing the image as a "light field" (meaning the sensor can tell which direction the rays hitting it come from; this is done with an array of small lenses in front of the sensor, each covering a patch of pixels). A minimal decoding sketch follows the list.
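To make the lens-array point a bit more concrete, here is a minimal sketch of how the sub-aperture views could be pulled out of a lenslet-sensor image. It's in Python/NumPy with made-up dimensions and an idealized, perfectly aligned layout; a real camera like Lytro's needs calibration, demosaicing and resampling first, and I'm not claiming this is their format.

    import numpy as np

    def decode_subaperture_views(raw, n_u, n_v):
        """Split an idealized lenslet-sensor image into sub-aperture views.

        Assumes each lenslet covers exactly n_u x n_v pixels and the image
        dimensions are exact multiples of that pitch (a toy assumption;
        real sensors need calibration and resampling first).
        """
        h, w = raw.shape
        n_s, n_t = h // n_u, w // n_v      # number of lenslets per axis
        # Separate out the n_u x n_v patch under each lenslet...
        lf = raw[:n_s * n_u, :n_t * n_v].reshape(n_s, n_u, n_t, n_v)
        # ...then reorder to (u, v, s, t): fixing (u, v) gives one image of
        # the scene as seen from one patch of the main lens.
        return lf.transpose(1, 3, 0, 2)

    # Toy example: a 7x7 angular grid over a 350x490-pixel sensor
    views = decode_subaperture_views(np.zeros((350, 490)), 7, 7)
    center_view = views[3, 3]   # the view through the center of the lens

Varying (u, v) is exactly the "view from different patches of the lens" I mentioned; all three applications above start from this stack of views.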
Better segmentation of the image enables all sorts of higher-level computer vision algorithms.
Whether this technology is any good as a consumer photography product, I'm not the one to say.
Another interesting application is as a display. A display with a lens-array-to-pixel configuration matching that of the sensor can re-create the light field for 3D viewing.
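Conceptually that is just the inverse of the decoding sketch above: weave the sub-aperture views back into one panel image so that each lenslet re-emits its pixels in the matching directions. Again a hypothetical sketch under the same idealized layout, not a description of any actual display:

    def interleave_for_lenslet_display(views):
        """Inverse of decode_subaperture_views: weave the (u, v, s, t) stack
        of sub-aperture views back into a single panel image for a
        hypothetical lenslet-covered display (idealized layout assumed)."""
        n_u, n_v, n_s, n_t = views.shape
        return views.transpose(2, 0, 3, 1).reshape(n_s * n_u, n_t * n_v)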
Well, refocusing is what the marketing and the reviews of this camera focus on (ha, I kill me...), but personally I find the 3D possibilities a lot more interesting. On their not-very-usable website, the refocused shots with most of the image blurry look like gimmicks, while the ones with a 3D effect (they are reconstructing the view from different points on the surface of the lens, letting our point of view move around the scene a little bit) look more promising. I don't know if I would buy the camera just for that as a consumer; to me personally, it would be more interesting in a computer vision product.
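For what it's worth, the basic refocusing effect can be produced by a shift-and-add over that stack of reconstructed views: translate each view in proportion to its offset from the lens center, then average, so points at the chosen depth line up and stay sharp while the rest blurs. A rough sketch, where the `alpha` focus parameter and the scipy-based shifting are my own illustration choices, not anything from Lytro:

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def refocus(views, alpha):
        """Synthetic refocusing by shift-and-add over the sub-aperture views.

        `alpha` is a made-up focus parameter: it scales how far each view is
        translated relative to its offset from the lens center, which picks
        the depth plane that ends up sharp."""
        n_u, n_v = views.shape[:2]
        cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
        acc = np.zeros(views.shape[2:], dtype=float)
        for u in range(n_u):
            for v in range(n_v):
                acc += nd_shift(views[u, v].astype(float),
                                (alpha * (u - cu), alpha * (v - cv)))
        return acc / (n_u * n_v)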
Returning to focus, it's true that this has some potential to turn the photographers' adage "f/8 and be there" into "f/2 and be there", letting you capture more light while keeping the subject in focus. However, this is not automatic, since multi-view matching has to be done to superimpose the images of the subject captured from different points of the lens; and this brings in the difficulties we have with stereo matching, such as occlusions (some parts of the subject can be seen from one point of the lens but not from another) or ambiguities (it's hard for an algorithm to tell which points in the pictures correspond to the same point on the subject).
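To illustrate where the matching gets hard, here is a naive block-matching sketch between two of the sub-aperture views (a toy example of the stereo step, not anything from the camera's actual pipeline). On textureless or repetitive regions many candidate disparities score almost equally well, which is the ambiguity problem, and occluded patches simply have no correct match to find:

    import numpy as np

    def block_match_disparity(left, right, patch=4, max_disp=8):
        """Naive SAD block matching between two horizontally offset views.

        For each patch in `left`, search up to `max_disp` pixels in `right`
        and keep the disparity with the lowest sum of absolute differences.
        Flat or repetitive texture gives many near-tied costs (ambiguity),
        and occluded regions have no valid match at all."""
        left, right = left.astype(float), right.astype(float)
        h, w = left.shape
        disp = np.zeros((h - patch, w - patch - max_disp), dtype=int)
        for y in range(h - patch):
            for x in range(max_disp, w - patch):
                block = left[y:y + patch, x:x + patch]
                costs = [np.abs(block - right[y:y + patch,
                                              x - d:x - d + patch]).sum()
                         for d in range(max_disp + 1)]
                disp[y, x - max_disp] = int(np.argmin(costs))
        return disp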