1. Scaling down in linear colorspace is essential (see the sketch after this list). One example is [1], where [2] is sRGB and [3] is linear. There are some canary images too [4].
2. Plain bicubic filtering is not good anymore. EWA (Elliptical Weighted Averaging) filtering by Nicolas Robidoux produces much better results [5].
3. Using default JPEG quantization tables at quality 75 is not good anymore. That's what people are referring to as horrible compression. MozJPEG [6] is a much better alternative. With edge detection and quality assessment, it's even better.
4. You have to realize that 8-bit wide-gamut photographs will show noticeable banding on sRGB devices. Here's my attempt [7] to reveal the issue using sRGB as a wider gamut colorspace.
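To make point 1 concrete, here is a rough sketch of gamma-correct downscaling using Pillow and NumPy: decode sRGB to linear light, resample, re-encode. This is just an illustration of the idea, not anyone's actual pipeline; the filenames and the Lanczos filter choice are my own (it is not the EWA filter from point 2).

    import numpy as np
    from PIL import Image

    def srgb_to_linear(u8):
        # Official piecewise sRGB decode, 8-bit -> linear [0, 1]
        c = u8.astype(np.float32) / 255.0
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(lin):
        # Inverse transfer curve, back to 8-bit sRGB
        lin = np.clip(lin, 0.0, 1.0)
        c = np.where(lin <= 0.0031308, lin * 12.92, 1.055 * lin ** (1 / 2.4) - 0.055)
        return np.round(c * 255.0).astype(np.uint8)

    def downscale_linear(src_path, dst_path, size):
        rgb = np.asarray(Image.open(src_path).convert("RGB"))
        lin = srgb_to_linear(rgb)
        # Resample each channel in linear light ("F" = 32-bit float mode)
        channels = [
            np.asarray(Image.fromarray(np.ascontiguousarray(lin[:, :, i]), mode="F")
                       .resize(size, Image.LANCZOS))
            for i in range(3)
        ]
        Image.fromarray(linear_to_srgb(np.stack(channels, axis=-1))).save(dst_path)

    downscale_linear("photo.jpg", "thumb.jpg", (320, 320))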
"When we started this project, none of us at IG were deep experts in color."
This is pretty astonishing to me. I always thought applying smartly designed color filters to pictures was basically Instagram's entire value proposition. In particular, I would have thought that designing filters to emulate the look and feel of various old films would have taken some fairly involved imaging knowledge. How did Instagram get so far without color experts?
In the "How I Built This" podcast for Instagram, Kevin Systrom specifically says the filters were created to take the relatively low-quality photos early smartphone cameras were capable of and making them look like photographer-quality photos. Filters were about taking average photos and making them "pop" but not necessarily by virtue of having deep ___domain knowledge of color.
I was never under this impression. I was always under the impression the filters were just created by a designer playing around with various effects until they looked nice.
Because it isn't complicated or novel to make compressed 8-bit jpegs have color filters. There are tools for the job and they've been around for a long time.
Working in a different color space than standard requires a little bit of familiarity and finesse that modifying 8-bit jpegs for consumption on the internet did not require.
Many photographers and printers are familiar with this dilemma in a variety of circumstances, where the cameras create images in a different color space and at a higher bit depth that can't be fully perceived with any display technology or by the human eye.
I'm sure the comment you're replying to wasn't thinking of the algorithm that applies a filter to a jpeg, but the process by which that filter is created in the first place. The assumption being that there's some sort of theory to colour that allows you to systematically improve the aesthetic qualities of images.
The creative process isn't novel. Most mobile apps, including Instagram, don't even offer layer masking, unlike the more robust pre-existing tools on desktop (and some other mobile apps), which severely limits the 'technical interestingness' to begin with.
Bit of a jump in topic, but I'm kind of curious: I don't use Instagram myself, but I'm sure it resizes images for online viewing, saving bandwidth and such. Does it do so with correct gamma[0]? Since that's a thing many image-related applications get wrong.
Not necessarily on topic - but given your code samples are written in Objective-C, how much of your codebase is still in Objective-C and what's your opinion on porting it over to Swift?
Good q. All of it is still Objective-C and C++; there are a few blockers to starting to move to Swift, including the relatively large amount of custom build + development tooling that we and FB have in place.
It's getting used more and more in our app--a few recent examples are the "Promote Post" UI if you're a business account and want to promote a post from inside Instagram, the Saved Posts feature, and the comment moderation tools we now provide around comment filtering.
Peculiar this is being downvoted - compiler speed / reliability / support for incremental builds are all issues with large Swift projects. I've even seen people go as far as splitting up their project into frameworks in order to bring build times down from as long as 10 minutes and restore incremental compilation.
I know there's not much love for the API these days, but is / will there be a way to access wide color images from the API? iPads and Macbook Pros also have wide color screens nowadays, so it would make sense to display them for specific use-cases in third party clients.
Great article. Question not directly related to your writeup: in your casual usage of a phone/photo app with the new, wider color space, do you notice a difference in your experience of using the app/looking at images? Or, in other words, does the wider color space feel at all different?
Photos only. Apple's APIs only capture in Wide Color when shooting in photo mode, and their documentation only recommends using wide color/Display P3 for images.
Just P3, though the wider gamut available in our graphics operations should benefit photos brought in using Adobe RGB too since iOS is fully color managed.
Good to know--I didn't run it through my Pixel. Some devices will do a relative projection from Display P3 to sRGB, which means that it will look "relatively" like it would in Display P3 but projected onto the sRGB color space.
Edited to add: and some other devices do something even less fancy, which is to ignore the color profile entirely, assume sRGB, and display it incorrectly, taking for example what would have been the maximum red point for Display P3 and making it the maximum red point in sRGB.
Since you brought up your Pixel: what is the point of adding something maybe 1% of your customers can see instead of fixing that horrible, horrible compression that makes uploads from Android (80% worldwide market share) look like crap?
(Not an android user, I just want to figure out how a company of your size prioritises between bugs and features.)
I can't speak for OP, but say this does affect 1% of users today: what percentage does it affect in 6 months, or a year? Not bad to be proactive.
And regarding android compression issues, although resources are always finite, I imagine in this case the android team is fairly separate, so they may very well be working on that compression issue while iOS is pushing forward into new terrain.
> I imagine in this case the android team is fairly separate
This is likely it, right here. So many people forget that larger companies have different teams working on different things. I bet a lot of their "iOS people" that are working on this project have no clue how the Android app works, and Instagram likely has a separate team working on the compression issues.
I don't use IG, so I wasn't aware of that problem or how long it had been around. That said, the general sentiment stands: one of their teams working on one thing doesn't show that they don't have another team working on something unrelated.
Well, given how Instagram has treated its android users can you blame them?
I've seen a number of SV companies release ugly, buggy Android apps and then use their shrinking Android user base as proof that Android users don't like their services.
To be honest, things could be worse. You could be a Tinder user on Windows Phone...
Yes, was a bit confused when I opened the image on my ancient work computer (Windows, old Dell monitor) and could see the logo clearly. Interestingly, Chrome refused to display the image despite it nominally being a PNG, and I had to open it via an external program. Dragging the window to my second monitor (different model) causes the logo to vanish (though I do feel like I see it for a split second on the second monitor... optical illusion?).
Chrome should be able to display it fine. The dl=1 param on the Dropbox URL means the request is served with a response header indicating it's a download. Change the link to dl=0 and it won't force the download.
By the way, anyone who can display it (all my nice monitors aren't with me at the moment) can you see it on this imgur link? http://i.imgur.com/qCna54M.png
imgur shows straight red on my 2015 Macbook Pro, but my iPhone 7 shows the logo.
The dropbox thumbnail link (with ?dl=0) shows the logo. Opening the link with ?dl=1 in Preview shows straight red. I think Dropbox is doing some weird thumbnail processing.
You are probably seeing the color management of monitor 1 in effect on monitor 2 while the window is less than halfway. Once it goes above half then monitor 2's color management takes over fully and it disappears.
I don't think that computer has a wide-color display. The 2016 MBP should (at least, the one with the touchbar, not sure about the one without), and the retina iMac supports it as well.
I couldn't either on my MBP late 2013 until I changed the Display Profile to "Display P3". There are other profiles that will show the logo, too, like "Adobe RGB (1998)"
If you change your display profile like this you're going to have incorrect color rendering, so don't forget to change it back after you've had your fun looking at the (inaccurately-rendered) image.
I have a 2016 MBP w/ touchbar, and depending on which color display profile I select in System Preferences, sometimes I can see it, and other times I can't.
On Windows I can see it using XnView, but I can't see it if I use the built-in image viewers. It's an older computer, so it's most likely XnView converting to sRGB.
This comment explains it (at least for Webkit's test image, which is probably similar to Instagram's).
No, it is a bad test.
It is an image with an ICC tag that indicates it uses a color space larger than sRGB. The image data has the logo using color that should be outside the sRGB color space, but it still uses 8 or 16 bits to store that data.
Android doesn't have color management. Android basically assumes all images are sRGB, so you see the logo.
iOS does have color management. iOS sees the ICC profile and interprets the image data so that if you do not have a display that could show you the different reds in the image, it doesn't display them.
So we have everyone in this thread on Android thinking they have a wide color display. Most of their displays aren't even 100% sRGB. My Nexus 4 shows the logo. It is very much not a wide-color display.
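For anyone curious how such a canary is put together, here's a rough sketch along those lines using Pillow (not the actual WebKit/Instagram image; the specific values and the profile path are my assumptions). The background is Display P3's maximum red; the "logo" is approximately sRGB red expressed in Display P3 coordinates, so a correctly color-managed sRGB screen clips both to the same red and the logo disappears, while a wide-color screen -- or a viewer that ignores the profile -- shows the difference.

    from PIL import Image, ImageDraw

    img = Image.new("RGB", (400, 400), (255, 0, 0))   # Display P3 max red
    draw = ImageDraw.Draw(img)
    # sRGB red expressed in Display P3 coordinates (approx.)
    draw.ellipse([100, 100, 300, 300], fill=(234, 51, 35))

    # Tag the pixels as Display P3 by embedding an ICC profile.
    # The path is an assumption -- on macOS one lives under
    # /System/Library/ColorSync/Profiles/Display P3.icc
    with open("Display P3.icc", "rb") as f:
        p3_profile = f.read()

    img.save("canary.png", icc_profile=p3_profile)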
I used the same approach as the Webkit image, so the same applies here, too (it's also why we only serve Display P3 photos to iOS clients with wide color screens; most Android devices would treat them incorrectly).
I'm not sure if this is different in the iPhone 7, but the 6S is pretty terrible at color representation in photos, specifically when dealing with neon lights. The iPhone tends to overexpose the neon light to make up for the surroundings being darker, so to get a 'decent' shot, I have to turn down the exposure by about 2/3 on the 'brightness' in the normal Camera app. But in general, it has a hard time showing as good of color as you would be able to capture with a better camera. For example, I've taken some photos of museum paintings during a visit, and the colors tend to be a little darker and not truly representative (yellows appear like mustard rather than a brighter yellow, for example). I'd love to be able to take more color-accurate photos, and it would make the iPhone 7 more worth getting if that's the case.
That's what I figured. I hope that the sensor becomes better and better, so that one day I can take great neon light photos (as well as see the correct color of red).
I did not know that Instagram was using OpenGL for processing. That's pretty neat, given the capabilities of OpenGL. I'm looking forward to seeing more filters with more lifelike Polaroid effects.
But before that, could they not convert images to horribly encoded JPEGs? I get it, bandwidth and costs, but it's an image service that still drowns in its own... when it gets an image with strong reds and blues.
We started using OpenGL in 2011. Our CPU-based image filters used to take 4+ seconds with pixel-by-pixel manipulation, and now can render 30+ times per second.
If you have some sample images where the current image pipeline is going wrong let me know and we can look into improving.
I also went through that process with my app Picfx. Using OpenGL for filters is much quicker; the only downside I've found is being limited by the texture size. I did set up a way to process images in tiles, but ultimately decided to just limit images to the texture size. Great info on the colour space; I'm sure it will be useful.
Instead of fixing the horrible JPEG encoding, can you please add support for WebP? It's quite a bit smaller and well supported with polyfills, since it's just a single VP8 frame.
You don't need a polyfill to deploy WebP. Chrome automatically sends image/webp in the Accept header for images, so at the CDN level you could implement some logic to seamlessly swap in WebP images for the browsers that support it. Imgix does this, for instance.
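If you're not behind a CDN that does it for you, the negotiation itself is tiny. Here's a hypothetical origin-side sketch (Flask; the routes and filenames are made up), keying off the same Accept header:

    from flask import Flask, request, send_file

    app = Flask(__name__)

    @app.route("/img/<name>.jpg")
    def image(name):
        # Browsers that support WebP advertise it in the Accept header
        resp = (send_file(f"images/{name}.webp", mimetype="image/webp")
                if "image/webp" in request.headers.get("Accept", "")
                else send_file(f"images/{name}.jpg", mimetype="image/jpeg"))
        resp.headers["Vary"] = "Accept"  # keep caches from mixing up the two variants
        return resp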
Really strange! I get the same behavior. I thought it was maybe Chrome vs my image viewer, but if I open the downloaded image with Chrome, it's just pure red, whereas in the Dropbox preview I definitely (faintly) see the logo.
If I right click and save the dropbox preview, I get a 14kb image, but the downloaded image is 29kb. As far as I can tell they're both PNGs with the same bit depth.
Interesting! From this link, I also see the logo on my rMBP (in Chrome or Safari) – but not in the IMGUR link I posted above, even though the IMGUR link reveals the logo on an iPhone7 iOS/10 MobileSafari.
Changing Chrome/OSX to report its User-Agent as iOS9 MobileSafari does not help get a different image from IMGUR.
This is interesting. Shows up in Firefox here, but not in Windows built in Picture viewer. My display is certainly only sRGB so I guess Firefox is doing some kind of correction.
I've written lots of graphics and image processing code.
You can see the logo because almost every computer system on the planet handles color spaces incorrectly. Apple's devices are actually better than most, though third party drivers such as those for printers can sabotage their color handling.
The canary image will appear as red without a logo on a computer with an sRGB display if that computer correctly handles color spaces throughout the whole imaging pipeline. That's a lot of ifs.
If your system ignores color spaces, you will see the logo because the Display P3 (DP3) color space gets compressed into sRGB. When you look at real-world DP3 images on this system, you will see the reds as being more muted. The same thing happens if you use an Adobe RGB camera (there are lots of these) and display its images in sRGB, except with the green channel, because Adobe RGB has a wider green range.
No matter which color space you use, an image will contain RGB tuples. The color space is additional meta-info which says how to interpret those tuples. Lots of software will ignore the metadata and simply assume the RGB tuples are used in the same way as it expects.
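The numbers make this concrete. Using the published primaries (the matrices below are the standard linear-RGB-to-XYZ ones, rounded), Display P3's maximum red has no representation in sRGB at all: converting it gives components outside [0, 1].

    import numpy as np

    # Linear RGB -> XYZ (D65), standard matrices (rounded)
    P3_TO_XYZ = np.array([[0.4866, 0.2657, 0.1982],
                          [0.2290, 0.6917, 0.0793],
                          [0.0000, 0.0451, 1.0439]])
    SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                            [0.2126, 0.7152, 0.0722],
                            [0.0193, 0.1192, 0.9505]])

    p3_red = np.array([1.0, 0.0, 0.0])               # Display P3 max red, linear
    srgb = np.linalg.solve(SRGB_TO_XYZ, P3_TO_XYZ @ p3_red)
    print(srgb)   # ~[1.22, -0.04, -0.02] -- outside sRGB's [0, 1] range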
I think "incorrectly" is a strong assertion to make, when it's behaviour that most users are actually accustomed to and expect.
I guess you could think of it somewhat like the difference between clipping an image larger than the monitor's resolution or scaling it to fit. In the former case you preserve the accuracy of individual pixels within the area that fits, but discard the information outside; and in the latter, you lose accuracy of individual pixels but preserve being able to see (an approximation of) the whole image. Applying this to colour spaces, "clipping" DP3 to sRGB preserves the "absolute" colour information but discards the "relative" differences (hence not being able to see the logo), while scaling discards the absolute colour (I think this is what you mean by "reds as being more muted") but preserves the differences (being able to see the logo).
Since a user looking at a monitor derives most of his/her information from the contrast between pixel's colours, I'd say discarding that contrast is the real "incorrect" choice most of the time. DP3 images scaled onto an sRGB monitor certainly won't look as good as on a DP3 one, but at least the user will still be able to resolve the fine detail that relies on differences in pixel values. Besides, getting absolute color accuracy on a monitor has always been nearly impossible in a non-specialised context since it depends so much on things like external lighting.
So... correct me if I'm wrong, but this DP3 color space they're using isn't increasing the bit-ness of the color, it's still 8 bit color, they're just using a different color space to get a wider range of color with less precision?
Seems sort of silly to me as most designers will be on sRGB displays and most people will be used to how images look in the sRGB space, but I guess it's one more way for Apple to sell more new Apple stuff by pretending these extremes in color are more important than precision in other parts of the spectrum.
I can definitely understand going to 10-bit color, this, not so much.
They're not mutually exclusive. Right now basically every single display is 8bpp; even if Apple went to 16bpp displays, when using normal applications most people wouldn't see anything special, because the source data is all still based on eight bits per pixel.
By improving the color gamut you can actually see a difference on the display. Areas where there were differences in color before, but they were invisible because of the display, now show an actual difference. It's slight, but it's there.
Seems like a good move to me. I imagine moving to 10 or 12 bit color will be the next step.
Last I checked, the browsers vary in their support for color management. Try a quick test at a page like this before drawing any conclusions from the canary image:
If you have a Macbook and can't see the logo on the canary image, try changing the color profile for your display. On my mid-2014 rMBP, the default color profile was "Color LCD", but I could change it to "Display P3" and see the logo.
To make the change: System Preferences -> Displays -> Color
Thanks for the hint. On my 2015 Retina MBP, when opening the IG image in Preview, I see the IG logo. However, when dragging the image into Chrome, I do not see it.
Are browsers generally color-correct? I was going through the Webkit wide color examples and was getting different visual results on different browsers[1]:
Browsers are all over the place, unfortunately. It's part of why sRGB became the only reasonable color profile for Web use. I think we'll see wide color become common in apps before the Web.
All over the place in what way? Support for different color profiles? Actually handling color spaces at all? The fact that there's no consistency when it comes to untagged images? The mess which is plugins? The ability to specify CSS colors in a specific color space?
When will this come to Android? I also opened a bug on the Android bug tracker: https://code.google.com/p/android/issues/detail?id=225281 and it is marked as a small issue. On Android we lack color management and are restricted to sRGB. Original wide-gamut and P3 images look dull on Android devices. I also want good wide P3 colors and color management without buying an iPhone.
Another thing worth mentioning is that lots of professional photo & graphics people have been using the Adobe RGB color space, which is "wider" than sRGB, for almost 20 years.
We built this in already! We don't have a "1x" or "2x" indicator, but the dual lens camera is fully used in Instagram now and will do the smart transition between 1x>2x optical and 2x+ digital zoom.
Not trolling, I care an incredible amount about color spaces, and had expected support for something like the UHDTV Rec.2020 color space with D65 (cool/blue white), not an obscure Adobe standard with D50 (warm/yellow white). It's wider than Rec.2020 though, so if this is what you're talking about: awesome, please update your article so people can find out more about this color model and the rest of the world can catch up!