>My guess is that it is only good for monochromatic light.
Does anyone know how much of a limitation that'd be in practice, though? CCDs are monochromatic too, which we deal with using either Bayer filters (usually RGGB, IIRC) or, in fancier high-end gear, a splitter prism feeding three separate CCDs. Modern computational photography is also increasingly able to do sensor fusion, even between different cameras. I can see how making use of this would still be an extra challenge in a small form factor, because normally the split/filter happens after the lens, which simplifies things a lot. Having to fuse three separate metamaterial-fisheye sensors would definitely be more effort and bulkier. It still seems like it could be pretty useful depending on the final cost; otherwise maybe it ends up niche, but it's really cool research either way.
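For anyone unfamiliar with the Bayer trick: here's a minimal sketch in Python/NumPy of an RGGB mosaic plus a crude bilinear demosaic. The function names and the normalized-convolution interpolation are purely illustrative, not how any particular camera pipeline actually does it:

    import numpy as np
    from scipy.signal import convolve2d

    def bayer_mosaic_rggb(rgb):
        """Simulate an RGGB Bayer sensor: the chip records a single
        monochrome value per pixel; color exists only in the filter
        pattern and is reconstructed afterwards."""
        h, w, _ = rgb.shape
        mono = np.zeros((h, w), dtype=rgb.dtype)
        mono[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R on even rows, even cols
        mono[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G on even rows, odd cols
        mono[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G on odd rows, even cols
        mono[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B on odd rows, odd cols
        return mono

    def demosaic_bilinear(mono):
        """Crude bilinear demosaic via normalized convolution: each
        channel is interpolated from the pixel sites that sampled it."""
        h, w = mono.shape
        sites = np.zeros((h, w, 3), dtype=bool)
        sites[0::2, 0::2, 0] = True  # R sites
        sites[0::2, 1::2, 1] = True  # G sites
        sites[1::2, 0::2, 1] = True
        sites[1::2, 1::2, 2] = True  # B sites
        k = np.array([[0.25, 0.5, 0.25],
                      [0.5,  1.0, 0.5 ],
                      [0.25, 0.5, 0.25]])
        out = np.zeros((h, w, 3))
        for c in range(3):
            vals = np.where(sites[..., c], mono, 0.0)
            num = convolve2d(vals, k, mode="same")
            den = convolve2d(sites[..., c].astype(float), k, mode="same")
            # Keep measured samples exact, interpolate the rest.
            out[..., c] = np.where(sites[..., c], mono,
                                   num / np.maximum(den, 1e-9))
        return out

The point being: the sensor itself is already a single monochrome array, and all the color is reconstructed downstream.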
CCDs aren't monochromatic, though: they detect a wide band of frequencies; they just can't tell the colors apart. High dispersion in the lens is a problem because a blue photon and a red photon originating from the same spot on the same object, and hitting the same spot on the lens, will be deflected to different spots on the CCD. That causes fringing and fuzziness in the image.
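To put rough numbers on that, here's a sketch using a simple Cauchy model for the refractive index (the coefficients are ballpark values in the neighborhood of BK7 glass, purely illustrative) and Snell's law at a single air-to-glass surface:

    import numpy as np

    # Cauchy approximation n(lambda) = A + B / lambda^2.
    # Illustrative coefficients only, not a real glass spec.
    A, B = 1.5046, 4.2e3  # B in nm^2

    def n(lmbda_nm):
        return A + B / lmbda_nm**2

    def refracted_deg(theta_in_deg, lmbda_nm):
        """Snell's law at a single air-to-glass interface."""
        s = np.sin(np.radians(theta_in_deg)) / n(lmbda_nm)
        return np.degrees(np.arcsin(s))

    theta = 30.0  # angle of incidence
    blue = refracted_deg(theta, 450.0)
    red = refracted_deg(theta, 650.0)
    print(f"blue: {blue:.3f} deg, red: {red:.3f} deg, "
          f"spread: {red - blue:.3f} deg")
    # ~0.14 degrees of angular spread; very roughly, over a ~25 mm
    # focal length that's tens of microns at the sensor, i.e. many
    # pixels' worth of color fringing.

(A real camera lens stacks multiple elements of different glasses to cancel most of this; the single-surface case just makes the effect easy to see.)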