Right. What I meant was that when the Lytro's image is 'focused' on a subject, the background is genuinely out of focus, rather than just having a fancy blur algorithm run over it to simulate defocus.
Well, but maybe a compromise is possible? I mean, for a stationary subject the Google approach (camera movement) and the Lytro approach (microlens array) capture the same data (with the caveat that Google apparently only uses a linear path).
I could imagine a system that captures a whole lot of frames with some camera shake, identifies which parts of the scene moved, and then computes "real" light-field bokeh for the stationary background while filling in the gaps with plain image-space blur.
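Something like this rough sketch of the idea (OpenCV + NumPy; the burst file names, the focus parameter alpha, and the thresholds are placeholders I made up, and real camera shake would need a homography fit or per-pixel flow rather than a single translation per frame):

    import cv2
    import numpy as np

    frames = [cv2.imread(p) for p in ["burst_000.jpg", "burst_001.jpg", "burst_002.jpg"]]
    h, w = frames[0].shape[:2]
    ref_gray = np.float32(cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY))

    # 1. Estimate each frame's camera-shake translation relative to frame 0.
    #    (Pure translation is a simplification; shake also rotates.)
    shifts = []
    for f in frames:
        g = np.float32(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY))
        (dx, dy), _ = cv2.phaseCorrelate(ref_gray, g)
        shifts.append((dx, dy))

    def translate(img, tx, ty):
        M = np.float32([[1, 0, tx], [0, 1, ty]])
        return cv2.warpAffine(img, M, (w, h))

    # 2. Synthetic-aperture refocus ("real" bokeh): undo each frame's shift
    #    scaled by a focus parameter, then average. Points whose parallax
    #    matches that scale line up and stay sharp; everything at other
    #    depths blurs with true parallax, like integrating over an aperture.
    alpha = 0.6  # placeholder; alpha = 1 refocuses on whatever plane dominated step 1
    warped = [translate(f, -alpha * dx, -alpha * dy)
              for f, (dx, dy) in zip(frames, shifts)]
    refocused = np.mean(np.stack(warped).astype(np.float32), axis=0)

    # 3. Crude motion mask: pixels where the aligned frames disagree a lot
    #    were probably moving (strong parallax triggers it too), so the
    #    light-field average would ghost there.
    aligned = [translate(f, -dx, -dy) for f, (dx, dy) in zip(frames, shifts)]
    variance = np.var(np.stack(aligned).astype(np.float32), axis=0).mean(axis=2)
    moving = cv2.GaussianBlur((variance > 150.0).astype(np.float32), (0, 0), 5)[..., None]

    # 4. Fill the moving regions with plain image-space blur of one frame.
    fake_blur = cv2.GaussianBlur(frames[0], (0, 0), 4).astype(np.float32)
    result = moving * fake_blur + (1.0 - moving) * refocused
    cv2.imwrite("hybrid_bokeh.jpg", np.clip(result, 0, 255).astype(np.uint8))

The key bit is step 2: averaging the shifted frames is a synthetic aperture, so the stationary background gets real parallax blur, while step 4 falls back to simulated blur wherever the frames disagree.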