The difference is that Google's effect is faked in software and relies on the huge depth of field of a tiny sensor. The Lytro actually does what Google is simulating.
That's like saying my MP3 player from the early 2000s had a microphone and cost $200, so why would anyone buy thousands of dollars of professional audio equipment anymore to record music?
>> The Lytro actually does what Google is simulating.
But wait a second. We're talking about the Google Camera lens blur function, right? That feature only simulates out-of-focus blur, which isn't really the same as what Lytro does.
What Lytro does is more akin to focus bracketing, isn't it?
Right. What I meant was that when the Lytro's image is 'focused' on a subject, the background actually is out of focus instead of just having a fancy blur algorithm run on it to simulate being out of focus.
Well, maybe it's possible to make a compromise? For a stationary subject, the Google approach (camera movement) and the Lytro approach (microlens array) capture the same data (with the caveat that Google apparently only uses a linear path).
I could imagine a system that captures a whole lot of frames with some camera shake, identifies which parts of the scene moved, and then computes "real" lightfield bokeh for the stationary background while filling in the gaps with plain image-space blur.
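Roughly, something like this minimal Python sketch (all names are hypothetical, and it assumes the per-frame camera offsets are already known from alignment; real lens blur or Lytro refocusing is considerably more involved):

    import numpy as np
    from scipy.ndimage import shift, gaussian_filter

    def hybrid_refocus(frames, offsets, focus_depth, motion_thresh=10.0):
        """Hypothetical hybrid refocus: shift-and-add 'real' synthetic-aperture
        bokeh for static regions, plain Gaussian blur where the scene moved.

        frames      : list of HxW grayscale arrays from a shaky burst
        offsets     : per-frame (dy, dx) camera translation estimates (assumed known)
        focus_depth : scale factor choosing which depth plane lands in focus
        """
        stack = np.stack(frames).astype(np.float32)
        ref = stack[0]

        # 1. Detect moving regions: pixels whose value varies a lot across the
        #    globally aligned burst are assumed to contain subject motion.
        aligned = np.stack([shift(f, -np.asarray(o), order=1)
                            for f, o in zip(stack, offsets)])
        moving = aligned.std(axis=0) > motion_thresh

        # 2. "Real" bokeh for the static background: shift each frame in
        #    proportion to its camera offset and average (synthetic-aperture
        #    refocusing). Points off the chosen focal plane smear naturally.
        refocused = np.mean(
            [shift(f, focus_depth * np.asarray(o), order=1)
             for f, o in zip(stack, offsets)], axis=0)

        # 3. Fill the moving regions with simple image-space blur on the
        #    reference frame, so the subject isn't ghosted by the averaging.
        fake_blur = gaussian_filter(ref, sigma=3)
        return np.where(moving, fake_blur, refocused)

The point of step 2 is that averaging shifted frames is the same trick a lightfield camera plays with its microlens views, so the background blur comes from real parallax rather than a synthetic kernel; step 3 only papers over the regions where the subject moved and that data doesn't exist.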