Neural compression is an emerging field and already shows some striking compression abilities, especially when the compressor/decompressor includes a large model, which amounts to having a huge pre-existing dictionary on both sides.
Stable Diffusion XL is only about 8 gigabytes and can render a shocking array of different images from very short prompts with very high fidelity.
Sure. The only reason image generators aren't deterministic is that you inject randomness: set the same random seed and you get the same image. Download Stable Diffusion XL, run it locally, and try it.
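Minimal sketch of what I mean, assuming the diffusers library and a CUDA GPU (the prompt and seed here are just placeholders):

    # Same prompt + same seed -> the same image, run after run
    # (modulo rare GPU kernel nondeterminism).
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a lighthouse on a rocky coast at sunset"  # placeholder prompt

    img_a = pipe(prompt, generator=torch.Generator("cuda").manual_seed(42)).images[0]
    img_b = pipe(prompt, generator=torch.Generator("cuda").manual_seed(42)).images[0]
    # img_a and img_b come out as the same picture.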
There are models that can be run in both directions. Take a source image and generate a token stream prompt for it. That's your compressed image. Now run it forward to decompress.
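Roughly this shape, as a sketch: a BLIP captioner stands in for the "reverse" direction and SDXL for the "forward" one, so the checkpoints and file names are illustrative, and the reconstruction only preserves the gist of the image, not its pixels:

    # "Encoder": image -> short text prompt.  "Decoder": prompt (+ seed) -> image.
    import torch
    from PIL import Image
    from transformers import BlipProcessor, BlipForConditionalGeneration
    from diffusers import StableDiffusionXLPipeline

    # Compress: caption the source image.
    processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
    captioner = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
    source = Image.open("photo.jpg").convert("RGB")  # placeholder input file
    inputs = processor(source, return_tensors="pt")
    prompt = processor.decode(captioner.generate(**inputs, max_new_tokens=40)[0],
                              skip_special_tokens=True)
    # `prompt` plus a fixed seed is the whole "compressed" payload: a few dozen bytes.

    # Decompress: regenerate an image from the prompt.
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe(prompt, generator=torch.Generator("cuda").manual_seed(0)).images[0].save("decompressed.png")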
Compute-intensive, but massive compression ratios can be achieved... like orders of magnitude better than JPEG.
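Back-of-envelope, with made-up but plausible numbers (and conveniently ignoring the multi-gigabyte model both sides need):

    # Rough illustration only; every number here is an assumption, not a measurement.
    jpeg_bytes = 300 * 1024      # a typical ~300 KB JPEG photo
    prompt_tokens = 75           # a fairly detailed prompt
    bytes_per_token = 2          # ~2 bytes per token ID
    seed_bytes = 8               # plus the seed, for deterministic decoding

    compressed_bytes = prompt_tokens * bytes_per_token + seed_bytes
    print(compressed_bytes)                 # 158 bytes
    print(jpeg_bytes // compressed_bytes)   # ~1900x smaller, at the cost of fidelity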
It's lossy compression, so we're not violating fundamental mathematical limits. Those bounds (pigeonhole counting, Shannon entropy) apply to lossless compression.
Well, the value proposition of image formats is twofold. 1. Transmission, which requires both sender and receiver to have the exact same model, which in turn requires us to standardize on some model and ship it with everything until the end of time. 2. Archival, which would require storing the model alongside the file (possibly more than counteracting any data saved by the improved compression) and would be highly fraught because, unlike existing decompression algorithms, the model cannot be described in simple text and reimplemented at will, which risks making the file inaccessible and defeating the point of archival.
It's a cool idea, especially for high-bandwidth, low-value contexts like streaming video calls, but I don't think it's going to wholesale replace ordinary lossy formats for images or prerecorded video delivery (and this is without considering the coding/decoding performance implications).
My personal experience with VR is that 1 Gbps is plenty; the issues with VR boil down more to things like latency (for instance, streaming a Quest wirelessly with Virtual Desktop basically requires the PC to be on Ethernet; with regular WiFi the experience is just awful).
There's only so much data the human brain can pay attention to. And for big data you're running in data centres anyway, just sending control data back and forth from home.