> it's flawlessly undetectable in terms of brightness -- the interface white and grays don't change at all.
In order to pull this off, you need to know exactly how many nits bright the display is, and you need complete software control of the actual hardware brightness. On Windows, you have neither. Enabling HDR mode completely throws off your current colors and desktop brightness: you have to reset your physical monitor settings and then dial in the new desktop white point with a software slider Microsoft buried in the advanced HDR settings (which almost nobody knows how to use), in the hope of landing somewhere in the vicinity of what you had before.
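For a concrete sense of why the exact nit level matters: once the output is HDR10, the signal is on the absolute SMPTE ST 2084 (PQ) scale, so wherever the OS decides "SDR white" sits becomes a specific code value. A rough sketch in plain Python; the constants are the published ST 2084 ones, but the paper-white values in the loop are examples I picked, not anything Windows actually uses:

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance (nits) -> 0..1 signal.
# Constants are the published ST 2084 values.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(nits: float) -> float:
    y = max(nits, 0.0) / 10000.0           # PQ is normalized to 10,000 nits
    yp = y ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

# The same #ffffff lands at very different code values depending on where
# the OS decides "SDR white" sits on the absolute scale:
for paper_white in (80, 200, 300):         # example values, not Windows defaults
    print(paper_white, round(pq_encode(paper_white), 4))
```

If the OS guesses that level wrong, or you can only nudge it with a blind slider, your whole SDR desktop shifts in brightness, which is exactly the complaint above.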
When it comes to display technology, vertical integration is a huge benefit. Look at high DPI: the state of the art on Windows in 2020 is nowhere near as good, from a software implementation or actual user experience point of view, as what Apple shipped on day one with the Retina MacBooks back in 2012.
Mac OS has also had system-wide color management and calibration (ColorSync) since the early 90s, part of its legacy of being the preferred platform for desktop publishing.
On Windows the systemwide "color management" basically consists of assigning a default color profile that applications can choose to use - which is generally only done by professional design/photo/video software, and not by the desktop or most "normal" apps.
Windows color management is pretty much just a folder where ICC files go to die. Everything needs to be done by the application. Nothing is color managed by default. This alone makes HDR displays (which aren't sRGB in HDR mode) less than worthless on Windows. The situation is identical on Linux, which copied the Windows approach.
Is there a way to get Windows to behave better? I was sorely disappointed by how things look in HDR mode, especially the fact that gamut seems to be abysmal compared to the same display in SDR mode, not even considering the terrible black level artifacting :/
I think Windows provides all the tools necessary for accurate color management, it's just that not everyone has done the necessary bookkeeping for it to work.
The underlying problem is that APIs describe colors in the display colorspace. #ffffff means "send full power to the red, green, and blue subpixels"[1], without describing what color the red, green, and blue subpixels are. That was not a major problem until relatively recently; every display used the same primary colors, so there was no need to specify what colorspace you were sending values to the OS in.

But then it became cheap and easy to use better primaries ("wide gamut"), and we had a problem. Every color written down in a file suddenly became meaningless; an extra piece of information would be required to turn that (r, g, b) tuple into a display color. So everyone kind of did their own thing!

Image formats have long had a way to tag the pixel data with a colorspace, so images with tags basically work everywhere. Applications can read the tag and tell the OS that colors are in a certain color space, and it can map that to your display. Most applications do that; if you have a wide-gamut display and take an AdobeRGB-space image off your digital camera, the colors will be better than if you looked at it on an sRGB display. Even web browsers handle this fine; if they are presented with an image with a colorspace tag, they'll make sure your monitor displays the right colors.
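To make the bookkeeping concrete, here's a rough sketch of what "turn that (r, g, b) tuple into a display color" involves for an Adobe RGB-tagged value shown on an sRGB display. Plain Python with the commonly published D65 matrices; a real pipeline would work from ICC profiles and the display's measured primaries rather than hard-coded numbers:

```python
# Sketch: reinterpret an (r, g, b) tuple tagged as Adobe RGB in sRGB terms,
# i.e. the extra step that becomes possible once the colorspace tag is known.

ADOBE_TO_XYZ = [
    (0.5767309, 0.1855540, 0.1881852),
    (0.2973769, 0.6273491, 0.0752741),
    (0.0270343, 0.0706872, 0.9911085),
]
XYZ_TO_SRGB = [
    ( 3.2404542, -1.5371385, -0.4985314),
    (-0.9692660,  1.8760108,  0.0415560),
    ( 0.0556434, -0.2040259,  1.0572252),
]

def mat_mul(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def adobe_decode(c):                 # Adobe RGB uses a pure ~2.2 gamma
    return c ** 2.19921875

def srgb_encode(c):                  # sRGB piecewise transfer function
    c = min(max(c, 0.0), 1.0)        # out-of-gamut values just get clipped here
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def adobe_rgb_to_srgb(rgb):
    linear = tuple(adobe_decode(c) for c in rgb)
    xyz = mat_mul(ADOBE_TO_XYZ, linear)
    lin_srgb = mat_mul(XYZ_TO_SRGB, xyz)
    return tuple(round(srgb_encode(c) * 255) for c in lin_srgb)

# Saturated Adobe RGB green needs negative sRGB red/blue, so it clips;
# neutral gray stays neutral, since both spaces share the D65 white point.
print(adobe_rgb_to_srgb((0.0, 0.6, 0.0)))
print(adobe_rgb_to_srgb((0.5, 0.5, 0.5)))
```

Without the tag, none of this can happen: the numbers just get forwarded to whatever primaries the panel happens to have.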
The problem is sources of color data that don't have a tag. CSS is a big offender. CSS doesn't specify the colorspace of colors, so typically browsers will just send whatever is in there directly to the display. That means if you're a web designer and you pick #ff0000 on your sRGB display, people using a wide-gamut display will see a much more vibrant shade of red, and everything will look off. In fact, pretty much everyone using a wide-gamut display will see wrong colors everywhere because of this; I have one, and I just forced it into sRGB mode because it's so broken. (On the other hand, a lot of people like more vibrant colors, so they think it's a good thing that they get artificial vibrance enhancement on everything they view. And are then disappointed when an application handles colors correctly, and what they see on their monitor are the same boring colors their digital camera saw out in the field.)
But, the problem is not Windows, the problem is applications and specs those applications use. Authors of specs don't want to say "sorry, there is no way you can ever use colors outside of sRGB without some new syntax", so they just break colors completely for everyone. That's why things look terrible on monitors that aren't sRGB; the code was built with the assumption that monitors will always be sRGB. Get rid of that assumption and everything will look correct!
There are also plenty of images out there that don't include color space tags, so it's undefined as to what colors they're actually trying to display. Some software assumes sRGB. Some software assumes the display colorspace. It's inconsistent. (I used to produce drawings in Adobe RGB and upload them to Pixiv, and their algorithm totally gets colorspaces wrong. It will serve your verbatim file to some users, but serve a version of the file to other users with the color tag removed, so that there is no possible way the viewer can see the colors you intended. I gave up on wide gamut and restricted myself to sRGB, because the Internet sucks.)
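A sketch of the "no tag means assume sRGB" convention, in Python using Pillow's littleCMS bindings (my choice of library, purely for illustration; the comment above doesn't name any particular tool):

```python
# Sketch: normalize an image to sRGB for display, treating untagged files as
# sRGB already (the convention the web specs settled on).
import io
from PIL import Image, ImageCms

def to_srgb(path: str) -> Image.Image:
    im = Image.open(path)
    icc = im.info.get("icc_profile")         # missing if the file has no tag
    im = im.convert("RGB")
    if not icc:
        return im                             # untagged: assume it is already sRGB
    src = ImageCms.ImageCmsProfile(io.BytesIO(icc))
    dst = ImageCms.createProfile("sRGB")
    return ImageCms.profileToProfile(im, src, dst, outputMode="RGB")

# to_srgb("camera_shot_adobergb.jpg").save("for_the_web.jpg")
```

The trouble described above is that not everyone agrees on the "assume sRGB" branch, and services that strip the tag force every viewer down it whether the file was sRGB or not.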
[1] It gets more complicated for shades of grey, involving gamma correction. #7f7f7f doesn't mean "send half as much electrical power to each subpixel"; it maps through a nonlinear curve to a much lower power level. The idea is to use the bits of the color most efficiently for human viewers; it's easy to tell "0 power" from "0.01% power". (You'll see this in practice when you write some code to control an LED from a microcontroller; if you just use the color as a PWM duty cycle, your images won't be the right colors on the display you just made. Of course, many addressable RGB LEDs do the gamma correction internally, so your naive approach of copying the image pixel values to the addressable LED will actually work. I learned this the hard way when I got addressable LED panels from two separate batches: the old batch did gamma correction and the new batch didn't. I didn't realize it was gamma at play, so I built an apparatus to measure the full colorspace of the LEDs with a spectrophotometer (https://github.com/jrockway/apacal). When I plotted the results, I immediately realized one was linear and the other was gamma-corrected... which meant all the code I wrote to build a 3D LUT for the LEDs was pointless; some simple multiplication was all I needed to make the LEDs look the same. But I digress...)
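To illustrate the footnote's point, a tiny sketch of the difference between shoving the 8-bit value straight into a PWM duty cycle and decoding it to linear light first (plain Python; the 8-bit PWM resolution is an assumption of mine):

```python
# Naive approach: write the image's 8-bit value straight into the PWM duty
# cycle. That only looks right if the LED driver gamma-corrects internally.
def naive_duty(pixel_8bit: int) -> int:
    return pixel_8bit

# For a driver that is linear in duty cycle, the sRGB value has to be decoded
# to linear light first, otherwise midtones come out far too bright.
def srgb_decode(c: float) -> float:
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_duty(pixel_8bit: int) -> int:
    return round(srgb_decode(pixel_8bit / 255) * 255)

print(naive_duty(127), linear_duty(127))   # 127 vs ~54: why #7f7f7f isn't "half power"
```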
> The problem is sources of color data that don't have a tag. CSS is a big offender. CSS doesn't specify the colorspace of colors, so typically browsers will just send whatever is in there directly to the display.
CSS defines that all hex and rgb() colors are always in the sRGB colorspace. There is also support for other colorspaces, such as P3 and Adobe RGB. [1]
The Safari web browser does color management for wide gamut displays correctly, so #ff0000 looks correct and not too vibrant. The biggest offenders are Chrome and Firefox, because they are not color managed. Those web browsers (and Windows) give you the wrong colors.
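As a rough illustration of what "correct and not too vibrant" means in numbers: a color-managed browser re-expresses CSS #ff0000 in the display's space instead of reusing the raw tuple. A sketch in Python with the published sRGB and Display P3 (D65) matrices; the values are rounded, and a real compositor works from profiles rather than hard-coded matrices:

```python
# What a color-managed browser effectively sends for CSS #ff0000 on a
# Display P3 panel: decode sRGB, pass through XYZ, re-encode for P3.
SRGB_TO_XYZ = [(0.4124564, 0.3575761, 0.1804375),
               (0.2126729, 0.7151522, 0.0721750),
               (0.0193339, 0.1191920, 0.9503041)]
XYZ_TO_P3   = [( 2.4934969, -0.9313836, -0.4027108),
               (-0.8294890,  1.7626641,  0.0236247),
               ( 0.0358458, -0.0761724,  0.9568845)]

def mul(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def enc(c):  # Display P3 reuses the sRGB transfer curve
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

srgb_red_linear = [1.0, 0.0, 0.0]            # #ff0000, already decoded to linear
p3 = [round(enc(c), 3) for c in mul(XYZ_TO_P3, mul(SRGB_TO_XYZ, srgb_red_linear))]
print(p3)   # roughly [0.917, 0.200, 0.139] -- red, but not the panel's maximum red
```

A non-color-managed browser just sends (1, 0, 0) to the panel, which is the "more vibrant than intended" red described above.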
> There are also plenty of images out there that don't include color space tags, so it's undefined as to what colors they're actually trying to display. Some software assumes sRGB.
Images without colorspace tags have been defined to be sRGB images by the web specs [1] and all web browsers should already do that. Other software may do something else, as you said. I hope all software will copy how web does it.
Firefox (and I think Chrome as well) does color management for images, and I think they even do it correctly these days (i.e. no tag = assume sRGB), which, out of the box, they did not for some time.
FWIW I think the Windows approach to color management has proven wrong and off-base for today's world. It stems from the 90s, when color management was seen as something only "pro" applications would ever need to do, so it was okay to require a lot of effort from those few application developers to implement color management in their apps. The MacOS approach, where applications tag their surfaces with one of a few standard color spaces and the system does the rest, is less powerful in theory but, on the other hand, means that things will actually work. Plus, I think MacOS has escape hatches so that apps can do their own color management based on output device ICC profiles if they really want to.
The desktop and most windowed apps definitely do use the systemwide color management. The main issue in the past has been whether apps correctly remap image colors based on the image's tagged profile (or lack thereof). It was broken on Macs too, but these days it isn't much of an issue.
So basically until displays advertise their physical specifications to Windows, and the Windows display stack takes advantage of that to auto-calibrate, it cannot match this kind of output.
I wonder what happens if you try this on the LG 5K hooked up to a Mac. It is physically the same panel that's in the iMac, so in theory it can present the same range. But if the OS needs to know the exact physical abilities of the display, it might not be able to detect that LG display. Or maybe Apple does detect it, because they partnered with LG.
I tried: I have an LG UltraFine 5K connected to a 5K iMac (2019), and it works on the iMac but not on the LG.
This might explain why I've experienced a slight difference when working with photos and video on the iMac monitor vs the LG UF5K. Interestingly, I have a small program synchronizing the brightness of my LG with the iMac, and the LG would light up every time I opened the video on the iMac and then come back down on closing it. This might explain some weird brightness behaviors I've been noticing on the 5K every now and then; I used to decouple the syncing whenever that happens, and now I might know why.
I was just going to ask the same question. Everything points to macOS working so well because Apple owns the entire chain. One would expect the external monitors that Apple promotes to work as smoothly as its own displays. So one more test would be to use a really nice monitor not promoted by Apple.
Not exactly related, but I’ve been using macOS, Windows, and (for the last week) Linux all on the same LG HiDPI screen (4K, but small screen), and no matter how I try to wrangle Windows I just can’t get it to look good. It’s incredibly frustrating when you know how good the monitor can look.
Linux seems better somehow but I haven’t had as much time with it there, so I’m not exactly sure why.
I'm with you on vertical integration. I've been looking at haptic feedback mice, and there are some from gaming companies, but no one has pulled it off well enough to enrich the gaming experience.