
The ops folks at a company I used to work for tried a VR workspace that put all of their graphs and terminals in a big sphere around you. With 2K screens, the text quickly became too pixelated to read. 4K should improve that somewhat, but I'm not sure it will be enough for a great text-based workflow.



Even at 4K per eye, if you imagine a screen at a typical viewing distance, the effective pixel density ("dot pitch") of the virtual display is going to be massively worse than that of a good-quality high-end monitor sitting on your desk.

We've been waiting like 10 years for that to change, since the Oculus dev kit days, and it's still not solved today. Advances in pixel density in this space have been incredibly slow.

I think it could be a very long time before a headset can simulate a really great display well enough for me, but others' mileage may vary.

Even with "foveated rendering" the peak dotpitch (the highest pixel density it can acomplish) simply isn't going to be good enough for me - it can't be any sharper than the dot pitch of the panel in front of the eye.

A 5K iMac has 14.7 million pixels - the pixel density needed to match a "real" display like that in VR would be pretty massive.
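
Back-of-the-envelope in Python, with assumed numbers (the ~50 cm viewing distance, ~100 degree headset FOV, and ~2000 px of horizontal resolution per eye are my guesses, not spec-sheet values):

    import math

    # 27" 5K iMac: 5120 px across a ~59.7 cm wide panel, viewed from ~50 cm (assumed)
    imac_fov_deg = 2 * math.degrees(math.atan((59.7 / 2) / 50))   # ~62 degrees
    imac_ppd = 5120 / imac_fov_deg                                # ~83 px per degree

    # Headset (assumed): ~2000 px of horizontal resolution per eye over a ~100 degree FOV
    hmd_ppd = 2000 / 100                                          # ~20 px per degree

    print(f"iMac ~{imac_ppd:.0f} px/deg vs headset ~{hmd_ppd:.0f} px/deg")

Roughly a 4x gap in linear pixel density, which is why text that looks crisp on the desk turns to mush on a virtual screen.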


I agree completely. A few months ago, I purchased a Meta Quest Pro. Relative to the Quest 2, the Pro’s resolution blew me away. And it’s still not even close to usable for real work on virtual monitors.


This, totally. I’m interested to see how this compares with the Varjo offerings wrt foveated rendering.

Reading text in VR is generally a horrible experience, and “4K per eye” does not equal even a single 4K screen.

That said, I would be happy with 8 1080p screens.


It's not 4K, though. They're not giving a lot of information, but "23M pixels" for two eyes is 11.5M pixels per eye. 4K is 8.2M, so this is 40% more pixels than 4K.
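
The arithmetic, if anyone wants to check it (taking 4K as 3840x2160):

    per_eye = 23_000_000 / 2        # 11.5 M pixels per eye
    uhd_4k  = 3840 * 2160           # 8,294,400 pixels
    print(per_eye / uhd_4k)         # ~1.39, i.e. ~40% more pixels than a 4K frame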


11.5M per eye is still far short of what would be needed to approximate the pixel pitch of many of Apple's "retina" displays at a typical desk viewing distance, FWIW. This is a really hard problem with the tech we have today.

Whether it's 8M or 11M or even 15M pixels isn't the point with regard to using it to replace desktop monitors - the point is that the density necessary to compete with excellent real-life physical displays is really high.

Your VR monitor only ever really uses a subset of the total pixel count - it still has to spend many of those pixels to render the room around the display(s) too.
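
To put rough numbers on that subset (all assumptions on my part: a 27" virtual monitor about 75 cm away, ~100 degrees of horizontal FOV, ~3400 px of horizontal resolution per eye):

    import math

    monitor_fov_deg = 2 * math.degrees(math.atan((59.7 / 2) / 75))  # ~43 degrees for a 27" screen at 75 cm
    px_across_monitor = 3400 * (monitor_fov_deg / 100)              # share of a ~100 deg, ~3400 px panel
    print(f"~{px_across_monitor:.0f} px across the virtual monitor") # ~1500 px vs 5120 on a real 5K panel

So even with ~11.5M pixels per eye, the virtual 27" screen gets roughly 1440p-class horizontal resolution, and the rest of the pixels go to drawing the room around it.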


The display system boasts an impressive resolution, with 23 million pixels spread across two panels, surpassing the pixel count of a typical 4K TV for each eye.


That's still enormously less than the dot pitch of a good 4K/5K/6K monitor in meatspace/real life today - remember, a virtual monitor only ever uses a subset of the total pixels in a VR headset, which is why the pixel count has to be sky high to compete with real life.


Yeah, with VR headsets you generally only get to count the pixels for each eye - since each eye only ever sees its own panel, that's all the resolution you actually have to work with.


Was this before the advent of VR headsets that do eye-tracking + foveated rendering? With the tech as it is these days, you're not looking at a rectangle of equally spaced little dots; almost all of "the pixels" are right in front of your pupil, showing you in detail whatever your pupil is trying to focus on.


For what it's worth, this was with an HTC Vive of some kind. However, the screen pixel densities don't change when you do foveated rendering; it's more of a performance trick - the GPU focuses most of its compute power on what you are looking at.
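
A minimal sketch of that trick, assuming a render function you can call at arbitrary resolution (the names and the 4x periphery downscale are made up for illustration, not any vendor's API):

    import numpy as np

    def render_foveated(render_fn, width, height, gaze_x, gaze_y,
                        fovea_px=512, periphery_scale=4):
        """render_fn(region, out_w, out_h) is assumed to rasterize the panel-space
        rectangle region = (x0, y0, x1, y1) into an out_h x out_w x 3 array.
        The panel's pixel density never changes; we only shade fewer pixels
        away from the gaze point."""
        assert width % periphery_scale == 0 and height % periphery_scale == 0

        # Cheap pass: the whole view shaded at 1/periphery_scale resolution, then upscaled
        low = render_fn((0, 0, width, height),
                        width // periphery_scale, height // periphery_scale)
        frame = np.kron(low, np.ones((periphery_scale, periphery_scale, 1), dtype=low.dtype))

        # Expensive pass: only the small window around the gaze point at full resolution
        x0 = max(0, min(gaze_x - fovea_px // 2, width - fovea_px))
        y0 = max(0, min(gaze_y - fovea_px // 2, height - fovea_px))
        frame[y0:y0 + fovea_px, x0:x0 + fovea_px] = \
            render_fn((x0, y0, x0 + fovea_px, y0 + fovea_px), fovea_px, fovea_px)
        return frame

The physical panel has the same number of pixels either way; the savings are all in shading work.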


> the screen pixel densities don't change when you do foveated rendering

That's the limited kind of foveated rendering, yes.

Apple has a system of lenses on a gimbal inside this thing. Which is precisely what's required to do the (so-far hypothetical) "full" kind of foveated rendering — where you bend the light coming in from a regular-grid-of-pixels panel, to "pull in" 90% of the panel's pixels to where your pupil is, while "stretching out" the last 10% to fill your peripheral vision. Which gives you, perceptually, an irregular grid of pixels, where pixels close to the edge of the screen are very large, while pixels in the center of the screen are very small.

The downside to this technique is that, given the mechanical nature of "lenses on a gimbal", they would take a moment to respond to eye-tracking, so you wouldn't be able to resolve full textual detail immediately after quickly moving your eyes. Everything would first re-paint just with "virtual" foveated rendering from the eye-tracking update; then gradually re-paint over the next few hundred frames in the time it takes the gimbal to get the center of the lens to where your pupil now is.

(Alternately, given that they mentioned that the pixels here are 1/8th the size in each dimension, they could have actually created a panel that is dense with tiny pixels in the center, and then sparse with fatter pixels around the edges. They did mention that the panel is "custom Apple silicon", after all. If they did this, they wouldn't have to move the lens, nor even the panel; they could just use a DLP mirror-array to re-orient the light of the chip to your eye, where the system-of-lenses exists to correct for the spherical aberration due to the reflected rays not coming in parallel to one-another.)
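
To make the geometry of that concrete, here's a toy model of the "pull the pixels in" idea as a radial warp r_field = r_panel^gamma; the exponent is purely made up, not anything Apple has described:

    import numpy as np

    # Toy model: panel radius r_panel (0..1, uniform pixel density) is warped to
    # perceived field radius r_field = r_panel ** gamma. gamma is invented here.
    gamma = 3.0
    r_panel = np.array([0.25, 0.5, 0.75, 1.0])
    r_field = r_panel ** gamma
    # Perceived density relative to an unwarped panel:
    # dA_panel / dA_field = (r_panel / r_field) * dr_panel/dr_field = r_panel**(2 - 2*gamma) / gamma
    density = r_panel ** (2 - 2 * gamma) / gamma
    for rf, d in zip(r_field, density):
        print(f"field radius {rf:.2f}: {d:5.1f}x uniform pixel density")

Tiny effective pixels right where you're looking, coarse ones at the edge of the field - which is the whole point of doing the foveation optically rather than just in the renderer.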

I'm not sure whether Apple have actually done this, mind you. I'm guessing they actually haven't, since if they had, they'd totally have bragged about it.


I'm guessing from this comment that you may not know much about optics or building hardware. Both of the solutions you have proposed here are incredibly bulky today, and would not fit in that form-factor.

> The custom micro‑OLED display system features 23 million pixels, delivering stunning resolution and colors. And a specially designed three‑element lens creates the feeling of a display that’s everywhere you look

They have advertised that there are 3 lenses per eye, which is about enough to magnify the screens and make them have a circular profile while correcting most distortion. That's it - no mention of gimbals or anything optically crazy.


>Apple has a system of lenses on a gimbal inside this thing.

Do you have a source for this?


I'm thinking there is confusion with the system used to set the PD (pupillary distance, i.e. the distance between the eyes). Of course there are not many details, but it does look like there's a motorized system that moves the optics and screens outwards to match the user's PD.


I think the key to that would be an interface design that's a step beyond "a sphere of virtual monitors", where zooming isn't just magnification but a nuanced, responsive reallocation of both visual space and the contextual information relevant to the specific ___domain.



