
Perceptual uniformity is in some ways the opposite of the linearization suggested above - the L* component of CIELAB is much closer to the gamma-encoded values of sRGB than to a linear light measure.
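To make that concrete, here's a small sketch (my own, not from the comment) comparing the standard sRGB encoding curve from IEC 61966-2-1 with CIELAB's L* as a function of relative luminance; the two stay within a few percent of each other across the whole range, while both sit far above the linear identity line in the shadows:

```python
import numpy as np

def srgb_encode(lin):
    """Linear light (0..1) -> sRGB-encoded value, per IEC 61966-2-1."""
    lin = np.asarray(lin, dtype=float)
    return np.where(lin <= 0.0031308,
                    12.92 * lin,
                    1.055 * lin ** (1 / 2.4) - 0.055)

def cielab_lstar(Y):
    """Relative luminance Y (0..1) -> CIELAB L* (0..100)."""
    Y = np.asarray(Y, dtype=float)
    eps = (6 / 29) ** 3
    f = np.where(Y > eps, np.cbrt(Y), Y / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116 * f - 16

Y = np.linspace(0.0, 1.0, 101)
# Largest gap between the two curves, with L* rescaled to 0..1:
max_gap = np.abs(srgb_encode(Y) - cielab_lstar(Y) / 100).max()
```

For 18% mid-gray, sRGB encodes to about 0.46 and L* is about 50 (i.e. 0.50 rescaled) - both roughly "half brightness", which is the sense in which gamma encoding is already quasi-perceptual.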

It seems tough to come up with hard and fast rules for whether to mimic the linear physical processes or to work in a perceptual space more like the human visual system. I'd love to hear about more rigorous work in this area - most things I've read boil down to "this way works better on these images".

It's interesting, for example, that using sinc-type filters to resize truly linear data, like that from HDR cameras, usually gives rise to horrible dark haloing artifacts around small specular highlights, despite that being the most "physically correct" way to do it. Doing the same operation in a more perceptual space immediately sorts out the problem.
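A 1D toy version of this effect (my own sketch, not the commenter's code; the simple power-law gamma of 2.2 stands in for "a more perceptual space") shows the mechanism: a Lanczos-windowed sinc has negative lobes, and when one lands on an HDR highlight the linear-light result undershoots below zero, while the same resample on gamma-encoded values stays positive:

```python
import numpy as np

def lanczos3(t):
    """Lanczos-3 windowed sinc kernel (zero outside |t| < 3)."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) < 3, np.sinc(t) * np.sinc(t / 3), 0.0)

def resample_at(signal, x):
    """Point-sample `signal` at fractional position x with Lanczos-3."""
    idx = np.arange(int(np.floor(x)) - 2, int(np.floor(x)) + 4)
    w = lanczos3(x - idx)
    w /= w.sum()  # normalize so a flat signal stays flat
    return float(np.dot(w, signal[idx]))

# Scene-linear 1D "image": dim background plus one small specular highlight.
linear = np.full(32, 0.1)
linear[10] = 4.0  # highlight, 40x the background

# Resample 1.5 pixels from the highlight, where a negative lobe lands on it.
halo_linear = resample_at(linear, 11.5)  # undershoots below zero: black halo

# Same operation on gamma-encoded values, decoded back to linear afterwards.
gamma = 2.2
halo_perceptual = resample_at(linear ** (1 / gamma), 11.5) ** gamma
```

In the linear case the filtered value goes negative (clipping to a black ring around the highlight); in the encoded case the highlight is compressed before filtering, so the same negative lobe only produces a mild dip - which is one way to read "the perceptual space sorts out the problem".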


