The fact that we still have botched gamma handling in compositing and scaling is one of my personal grievances. In 3D graphics people have mostly learned to use linear light space, since otherwise lighting comes out wrong, and the more closely the simulation models how light really works, the more obvious the error becomes.
But in, say, Photoshop the gamma is still wrong, unless you switch to 32-bit space, which produces huge files and removes access to most filters. You don't need 32-bit precision for proper gamma. You just need gamma correction when combining/interpolating/blending color values. Alas. Ignorance is pervasive.
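Concretely, "gamma correction when blending" just means decode to linear light, do the math there, and re-encode. A quick sketch, using a plain 2.2 power curve rather than exact piecewise sRGB, and with helper names of my own invention:

```python
def to_linear(v8):
    # 8-bit encoded value -> linear light in [0, 1] (2.2 power-law approximation)
    return (v8 / 255.0) ** 2.2

def to_encoded(lin):
    # linear light in [0, 1] -> 8-bit encoded value
    return round((lin ** (1 / 2.2)) * 255.0)

def blend_naive(a, b):
    # wrong: averages the encoded bytes directly
    return round((a + b) / 2)

def blend_linear(a, b):
    # right: average in linear light, then re-encode
    return to_encoded((to_linear(a) + to_linear(b)) / 2)

print(blend_naive(0, 255))   # 128 -- displays far too dark
print(blend_linear(0, 255))  # 186 -- the correct 50% light mix
```

No 32-bit document mode required; the conversion only has to happen inside the blend.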
BTW, I was a beta tester of Photoshop at Adobe like 20 years ago and I kept pestering them about it. None of the engineers there, including tenured "fellows" could comprehend the problem. Extremely aggravating.
Similar experience at the fruit company ~15 years ago designing a graphics scaling and compositing pipeline for one of the earlier high volume phone SoCs. It was originally designed with (de)gamma blocks before/after the scaling and compositing blocks to do it in linear space but the (de)gamma blocks were removed at the request of a graphics team because "nobody does that" - I recall the justification being that their assets were optimized for the incorrect behavior so they preferred it. I had a copy of Jim Blinn's book Dirty Pixels (aka the Gammasutra) that I would refer people to but it didn't change any minds. I hope they've improved since.
Don't know why I remember this, but the iOS 7 icon for Voice Memos was a good demonstration of doing scaling wrong. It had a lot of fine detail, so the gamma-incorrect scaling produced some obvious twinkling during zoom animations.
It sounds crappy either way, but maybe they just didn't care? Or the code was a mess and it would have been a hassle to implement? I find it really hard to believe that people at freaking Adobe would be unable to comprehend this...
Well sure, many were aware of gamma correction, but everyone's so used to how incorrect compositing looks that it works "fine" in their mind. And being aware of gamma doesn't mean you truly grok the extent of the visual corruption that spreads through images while compositing, scaling, editing, etc.
I heard explanations that it'd be slow, or that it doesn't matter, or "what are you even talking about", all ultimately variations of "works as coded, won't fix".
There are clearly people at Adobe who are aware of the issue in some capacity, and their efforts have weird specifics, such as... they noticed text anti-aliasing looks weird, so they added an Advanced Control called "Blend Text Colors Using Gamma" option in Color Settings. The bizarre thing is that none of this is specific to text. It's just that text has many small complex shape edges, and you notice it more on text (like Carmack noticed it on the star field). They didn't fix vector art rendering, for example. And the option default is not even the correct gamma for a document, but 1.45 which makes no sense at all. They just "eyed it" and went with it.
There's also a Blend RGB Colors Using Gamma option which is disabled by default. If you enable it, it corrects SOME operations, but not others, with no rhyme or reason. However, you can't really tweak those yourself, because if you do, the settings are not saved per document, they're global. So if you tweak them... then ALL Photoshop files you open will look wrong, because they were originally made for another configuration of these settings. Which leaves you with 32-bit color channels. I have Actions that convert an image to 32-bit to scale, and back to 8/16-bit, just so it's gamma corrected.
John Novak proposes a different hypothesis for Photoshop's choice of gamma=1.42 in text antialiasing:
"Photoshop antialiases text using γ=1.42 by default, and this indeed seems to yield the best looking results (middle image). The reason for this is that most fonts have been designed for gamma-incorrect font rasterizers, hence if you use linear space (correctly), then the fonts will look thinner than they should."
Fonts designed with incorrect gamma rendering also assume specific colors for foreground and background. With incorrect gamma the stems get thicker when rendered black on a white background, and thinner with inverse colors.
So even if a font was designed around incorrect gamma, it might still not be best to render it with incorrect gamma, because you might use different foreground and background colors than the ones the font's designer assumed.
AFAIK FreeType has an option to enable "stem darkening" to consistently compensate for fonts designed around incorrect gamma, regardless of the colors used for rendering.
I was also a PS beta tester around that time, and prominent testers such as Stu Maschwitz were actively pushing the PS and After Effects teams to incorporate linear light compositing.
Stu was able to get linear compositing into AE, but I think PS had already evolved into more of a graphics tool than a photo tool. They added 32bpcc and HDR, but it's for specialty workflows, not general use.
Fully embracing linear light would change almost every part of PS—everything from blend modes to the built-in filters were written for perceptually encoded values that don't go over 1.0. Even today, Curves works with 32bpcc, but only shows you the 0–1 portion.
It's not hard to believe, as Photoshop is often broken and is known for massively janky workarounds. Adobe consistently ignores basic feature improvements in Photoshop; from the outside it looks like a business decision. Most of the work is redirected into their other products (the new brush engine in Fresco, SpeedGrade color processing integrated into Premiere, etc.).
It’s called something like “Blend in Gamma: 1.0”. The default is 2.2 of course.
The annoying thing is that it can’t be done per image, so if you change it, it breaks the blending of any image you load that was made by someone who doesn’t have that setting enabled.
I occasionally walk around our studio and audit that this setting is set correctly for all our VFX artists, because they are the ones who will see incorrect results compared to in-engine if they use the wrong setting here.
Whenever linear RGB / gamma discussions come up, you know there's going to be a lot of folks going on about how frustrating it is that the default for most apps is to do their blending/filters/etc in gamma space (as TFA does).
And they're right. But there's an entire flip side of this coin, called Linear is Not Always The Answer.
Are you dealing with the behavior of light? Then you should be in a linear colorspace.
But if you're performing artistic operations or doing other things that need to look right to a human, that is not what you want. What you actually want is a perceptually uniform colorspace. Linear RGB is most definitely _not_ perceptually uniform! It is, in fact, much worse than sRGB!
sRGB for all its flaws is at least roughly perceptually uniform for grey tones, a kind of "poor man's perceptual space". Color mixing, gradients, etc etc all work much better in a uniform space.
Of course you use gamma so that you have a more perceptually uniform colorspace, mostly because 8 bits isn't enough precision to correctly represent the various shades of near-black the eye can perceive. Or because it's easier to make a perceptually linear gradient in gamma space.
But blending in gamma space looks WRONG. It isn't perceptually correct! Try blending pure white and pure black at 50%: the grey you obtain is the wrong one, far too dark. Surely this cannot be what people want.
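To put numbers on that white/black example (assuming a plain 2.2 power curve for the display):

```python
# Naive 50% blend of white (255) and black (0), done on the encoded bytes,
# then pushed through a 2.2 display curve to see what actually gets emitted.
naive_mix = (255 + 0) // 2               # 127, the "obvious" byte average
emitted = (naive_mix / 255.0) ** 2.2     # fraction of white's light actually produced
print(f"{emitted:.0%}")                  # ~22%, nowhere near the 50% you asked for
```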
When the impulse falls between two pixels, the amount of light should be distributed over those two pixels, so that the total amount emitted stays the same. However, we use a nonlinear encoding to map these physical values to numbers, which is called gamma compression. So in case of an 8-bit encoding, if 1 lumen is mapped to 255, then 0.5 lumen would correspond to a value of say 186, instead of 128. The display hardware uses the reverse of this mapping to emit the intended amount of light from each display pixel, so everything works fine as long as gamma is accounted for in the whole pipeline.
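Sanity-checking that "186" (the parent is clearly using a plain 1/2.2 power law rather than the exact piecewise sRGB curve):

```python
# 0.5 lumen, gamma-compressed with exponent 1/2.2, mapped onto 8 bits
half_lumen_code = round(0.5 ** (1 / 2.2) * 255)
print(half_lumen_code)  # 186, not 128
```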
However, if you have a gamma-compressed image, you must be careful when doing any transformations on this image in the digital ___domain, because the pixel values live in a non-linear space, and this must be taken into account when transforming them!
E.g. let's say we have our 0.5 pixel/frame animation from above, encoded with gamma=0.45:
And in post-processing you resize this video to 0.5x of the original size, with software that naïvely blends the pixel values when shrinking the image:
Two adjacent gray pixels (the star is between two pixels) won't have the same total brightness as a single fully bright one (the star is at the center of a single pixel) if the interpolation is not correct with respect to gamma. So, as the star moves slowly, I would expect some twinkling.
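A rough sketch of why it twinkles, under the same 2.2 power-law assumption (variable names are mine):

```python
def emitted(v8):
    # light actually produced by an 8-bit code on a gamma-2.2 display
    return (v8 / 255.0) ** 2.2

# Star centered on one pixel: code 255 in a single pixel.
centered_total = emitted(255)                 # 1.0 total linear light

# Star halfway between two pixels, renderer halving the *encoded* value:
split_wrong_total = 2 * emitted(127)          # ~0.43 -- the star visibly dims

# Correct: split the *linear* light, half into each pixel.
split_right_total = 2 * (emitted(255) / 2)    # 1.0 -- energy conserved

print(centered_total, split_wrong_total, split_right_total)
```

So as the star drifts across pixel boundaries, total brightness oscillates between ~0.43 and 1.0: the twinkle.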
If you look in the green section, you can see the background is a checkerboard of (0, 0, 0) black and (0, 255, 0) green. Because of gamma, this is not equivalent to (0, 127, 0).
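For what it's worth, with the exact piecewise sRGB transfer function the solid color that matches that checkerboard comes out around (0, 188, 0) (about (0, 186, 0) with a plain 2.2 curve):

```python
def srgb_encode(lin):
    # exact piecewise sRGB encode, linear [0, 1] -> encoded [0, 1]
    return lin * 12.92 if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055

# A 50/50 checkerboard of linear 0.0 and 1.0 averages to linear 0.5.
solid_equiv = round(srgb_encode(0.5) * 255)
print(solid_equiv)  # 188 -- the solid green that truly matches the checkerboard
```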
The 2.6, 2.2, 1.8 sections are supposed to blend with the background if that's the gamma that your image-to-screen pipeline uses. (Including the browser's PNG decoder, the browser's painting and compositing, the desktop's painting and compositing, anything goofy the GPU drivers do, and all the goofy stuff monitors do)
Looks like my cheapo 2010 LCD's gamma changes greatly depending on viewing angle. Ugh :S
Funny thing: when I view that image and zoom in on my 4K monitor, it somehow gets interpolated. I have a bookmarklet that sets the style `image-rendering: pixelated`, and when I click it, the image suddenly gets much brighter. That interpolation doesn't just look ugly and blurry, it corrupts the colors. Maybe instead of a bookmarklet I should put that into a user style applied to all websites.
While bilinear works best when operating on linear colors, this isn't true of all resampling techniques. Fourier-based kernels (sinc, Lanczos, jinc) work better on nonlinear colors, as the ringing induced by these filters is exaggerated in linear color spaces.
Recognize when you are doing linear algebra and don’t do linear algebra on non-linear representations.
Gamma encoding is only really necessary to work around the limitations of 8-bit color channels. So: load sRGB888, convert to linear 32-bit floats, do the linear algebra, convert back to sRGB888, store the result.
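A minimal sketch of that workflow, using the exact piecewise sRGB transfer function, with a box-filter average standing in for the "linear algebra" step:

```python
def srgb_to_linear(v8):
    # decode one sRGB888 channel value to linear light in [0, 1]
    c = v8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(lin):
    # encode linear light back to an sRGB888 channel value
    c = lin * 12.92 if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
    return round(min(max(c, 0.0), 1.0) * 255)

def average_pixels(pixels):
    # e.g. one block of a box-filter downscale: average in linear, re-encode
    lin = [srgb_to_linear(p) for p in pixels]
    return linear_to_srgb(sum(lin) / len(lin))

print(average_pixels([0, 255]))  # 188, not the naive 128
```

The round trip is lossless for all 256 input codes, so there's no fidelity argument against doing it this way.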
This is how color workflows work in film and tv. I imagine the issue is his television (despite him blaming the stream and his follow-up Netflix anecdote). Netflix has delivery specs that specifically outline these issues and require you to go through a workflow that prevents them (though people make mistakes and QC misses things).
I've honestly started questioning whether the gamma representation was even worth it. Does using 8-bit linear really lose that much fidelity? I guess to test that you would need a display capable of showing 10-bit colour, or maybe a CRT with gamma adjustable down to 1.
A gamma of 2.2 puts that '15' at 0.2% brightness, and the '20' at 0.4% brightness.
An 8 bit linear representation would make that '20' square the minimum brightness above zero. The next step up would be roughly the '30' square.
So yes, the gamma curve is very necessary. Even 12 bit linear would be a bad idea. So 14 or 16 minimum. And adding HDR is something like 2 bits with gamma and 8 bits without.
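The parent's numbers check out under a 2.2 power law:

```python
def brightness(v8):
    # light emitted for an 8-bit code through a 2.2 power curve
    return (v8 / 255.0) ** 2.2

print(f"{brightness(15):.2%}")  # ~0.20%
print(f"{brightness(20):.2%}")  # ~0.37%

# The first two nonzero 8-bit *linear* values, expressed as gamma-2.2 codes:
step1 = (1 / 255) ** (1 / 2.2) * 255
step2 = (2 / 255) ** (1 / 2.2) * 255
print(f"{step1:.1f} {step2:.1f}")  # ~20.5 and ~28.2 -- nothing representable in between
```

So 8-bit linear really does jump straight from black to roughly the "20" square, then to near the "30" square.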
The man is clearly smart, but it's posts like this that show he had no business in a leadership role at Facebook. When he left, he cited low GPU utilization as an "offensive" metric. These skills were great for writing early 3D game engines, but not for developing consumer VR.
I'm frankly curious about the thought train that led you from "Carmack makes a technical post about gamma correction" to "he can't lead in a tech company"
He misses the forest for the trees. It's not this one case; it's a pattern with him. He's not wrong about these things, but he calls them out rather than calling out more meaningful, high-level issues.
They were only using 5% capacity of their datacenter GPUs. They could have trained their deep learning models close to 20X more, etc. He mainly just said it was a signal of their inefficiency as an organization and was a final straw, not a sole reason. He detailed many other major misallocations of resources throughout his time there.