
The NASA images of space are colorized black-and-white images, if I'm not mistaken. So it seems artists are involved in NASA's images as well.



Most scientific astronomical images are based on capturing specific frequency bands, which are then assigned colours for visual interpretation.

Given that many of these frequencies are beyond the limits of human vision (infrared, microwave, and radio at the low end; ultraviolet, x-ray, and gamma-ray at the high end), this is somewhat out of necessity.

The individual channels largely record intensity, so in that regard they're similar to B&W photographs, but the intensities are of specific bands, rather than the wide range of bands captured in silver-halide-based photographs. There may be some frequency variation recorded and interpreted as well; I'm not certain of this.
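
To make the channel-to-colour assignment concrete, here's a minimal sketch in Python/NumPy. The random arrays stand in for real telescope frames, and the S-II/H-alpha/O-III mapping is just one common convention (the "Hubble palette"), not any mission's actual pipeline:

    import numpy as np

    # Minimal sketch: three hypothetical monochrome intensity maps, one per
    # narrowband filter, normalised to 0..1 (random numbers stand in for data).
    s_ii    = np.random.rand(512, 512)   # singly-ionised sulphur line
    h_alpha = np.random.rand(512, 512)   # hydrogen-alpha line
    o_iii   = np.random.rand(512, 512)   # doubly-ionised oxygen line

    # One common false-colour convention maps S-II -> red, H-alpha -> green,
    # O-III -> blue. Any other assignment is equally "valid"; the choice is
    # interpretive, not physical.
    rgb = np.dstack([s_ii, h_alpha, o_iii])        # shape (512, 512, 3)

    # Simple linear stretch so faint structure becomes visible.
    rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min())

Swap the assignment around and you get an equally "valid" picture of the same object, which is exactly why these composites are interpretations rather than records of colour.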

That said, I'm pretty sure there's also an eye to the aesthetics and public appeal of landmark released images.

This link shows and discusses multi-spectral images, showing the channels individually for several objects:

https://ecuip.lib.uchicago.edu/multiwavelength-astronomy/ast...

The Crab Nebula at frequencies from radio to gamma: https://upload.wikimedia.org/wikipedia/commons/thumb/6/6b/Cr...

An image of the galaxy Centaurus A which includes infrared and x-ray channels: https://apod.nasa.gov/apod/ap210117.html

Today's APOD shows the Ring Nebula in infrared, red, and visible light: https://apod.nasa.gov/apod/ap210818.html

You can find such examples by searching for different parts of the electromagnetic spectrum, e.g., "radio, ultraviolet, x-ray". These are typically noted in APOD's image descriptions.


You are correct. NASA employs artists to generate published press images.

The individual pictures sent by the spacecraft are indeed monochrome images taken with different filters. These filters don't usually correspond to RGB, and calibration is required to approximate human perception.

Most of the time there's no human-visible equivalent at all (this applies to pretty much all deep space imagery) and the colours are basically made up (the intensities aren't - they correspond to frequency responses, just not necessarily within the human visual spectrum).
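
For the cases where calibration to approximate human perception is attempted, the usual trick is a linear transform from the instrument's filter responses to something close to sRGB. Here's a rough sketch with entirely made-up coefficients and random stand-in data; real pipelines derive the matrix from calibration targets and are considerably more involved:

    import numpy as np

    # Sketch only: three monochrome frames taken through filters that don't
    # line up with human R/G/B responses (random numbers as stand-in data).
    filter_frames = np.random.rand(3, 256, 256)

    # A hypothetical 3x3 calibration matrix (coefficients made up for
    # illustration; real ones come from imaging a target of known colours).
    M = np.array([[ 1.20, -0.15, -0.05],
                  [-0.10,  1.05,  0.05],
                  [ 0.02, -0.20,  1.18]])

    # Apply the transform at every pixel: (3, H, W) -> (3, H, W).
    approx_rgb = np.einsum('ij,jhw->ihw', M, filter_frames)
    approx_rgb = np.clip(approx_rgb, 0.0, 1.0)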

Here's a discussion on the topic: http://ivc.lib.rochester.edu/pretty-pictures-the-use-of-fals...


I'm still conflicted. The article you linked goes deep into the philosophy of knowledge, but arguably still misses the most important point: false color can be a tool for analysis, and pretty pictures are entertaining (and/or good marketing), but neither of them lets us - as individuals - get closer to the phenomena.

When I look at photos of distant places on Earth, I do so because I'm trying to imagine how they would look if I were actually there. This is the main reason people care about photos in general[0] - they're a means to capture and share an experience. As a non-astronomer, when I'm looking for photos of things in space, I also want to know - first and foremost - how these things would look to my own eyes if I were close enough to see them.

The article mentions NASA defending their image manipulation as popular astronomy's equivalent of red-eye removal. But it's not that. Red-eye removal exists to correct for the difference between a flash-equipped camera system and human eyes. It's meant to manipulate the image strictly to bring it closer to what a real human would've seen at the scene. Where is astronomy's actual equivalent of that? Pictures manipulated in such a way as to make them maximally close to what an astronaut in a spacesuit would see, if they were hanging around the astronomical object in question? That's what I'd like to see.

Such pictures won't be as exciting as the propped-up marketing shots, but they'll at least allow viewers to calibrate their understanding of space against reality. I would think this would be seen as more important in the truth-seeking endeavor of science.

--

[0] - Marketing notwithstanding - the reason imagery is so useful in advertising is this same desire; carefully retouched and recomposed pictures are superstimuli, the way sugar is for our sense of taste.


> [...] what an astronaut in a spacesuit would see, if they were hanging around the astronomical object in question? That's what I'd like to see.

Well, sorry to disappoint you there, but that's technically impossible. For most deep space images human eyes would likely see nothing at all (because we cannot perceive the frequency bands depicted). In other cases it's hard to tell, because the lighting conditions are vastly different from our daily experience (no atmosphere, harsh contrasts, very little sunlight) and the sensors collected photons for hours - something human eyes just can't do.

One problem is that the equipment used is simply incapable of recording images as human eyes would see them - try to snap a picture of the evening sky with a smartphone camera to see this first-hand (unless your newfangled device sports AI image enhancement, in which case it's just as fake [0]).

> I would think this would be seen as more important in the truth-seeking endeavor of science.

Most scientists see this "truth" as something objective, though, and that's strictly not what human perception is in the first place. [0] shows a smartphone's RAW image next to an AI-enhanced interpretation, and that's a good analogy for what happens with scientific data as well. Neither of the two versions matches what a person would see, yet we accept the AI version as "good", even though it neither depicts what the camera sensor picked up nor shows what a person would have seen.

So what is "the truth" in this case? Is it the grainy, desaturated, and dark raw data from the CCD sensors, or the artificial construct that tries to mimic what a human observer might have perceived?

[0] https://www.washingtonpost.com/technology/2018/11/14/your-sm...


>For most deep space images human eyes would likely see nothing at all

That depends on the definition of "deep".

Presumably in close orbit around a planet, a person would see something. In fact, I've read that even on Pluto, sunlight at noon is still much brighter than moonlight on Earth.

And considering how far away Orion is, what if someone were ten times closer? Is it unthinkable that it might be comparable to the Milky Way?


An aspect of this I find fascinating is the different ways photos from Mars are coloured.

The cameras they have sent are quite good and can take 'true colour' images.

From the raw images they can be coloured as if a human were standing on Mars, but more often they are coloured as if that part of Mars were on Earth. The reason for this is so that scientists can use their Earth-adjusted eyes to study the images - you want the minerals in the photo to look familiar to an Earth scientist.


> Where is astronomy's actual equivalent of that?

Removing Starlink satellites? *ducks*


Nah, many space probes have colour cameras too. Here are some images from ISRO's Mars Orbiter: https://www.isro.gov.in/pslv-c25-mars-orbiter-mission/pictur...


These aren't colour images - they're all coloured after the fact.

What the sensors do is take images at multiple wavelengths. Each individual channel is still monochrome, and "real colour" images are produced by interpreting them as red, green, and blue with various weights depending on calibration.

If you look at the actual sensor wavelengths, you'll notice that they don't cover the same frequency spectrum as RGB sensors or human eyes. It's all interpretation and calibration (that's why the Mars rovers have colour calibration targets on them).

Another method (often used in astrophotography) is to use a single monochrome sensor, put RGB filters in front of it, take separate pictures with each filter, and recombine them (done on Earth).

You can see the frequency bands of the filters used on the MERs Spirit and Opportunity here: https://mars.nasa.gov/mer/gallery/edr_filename_key.html

(the rovers used 16 different filters, so no RGB there)


A smartphone camera, DSLR, or other consumer camera doesn't work any differently, though. It's just that an array of color filters is fixed above the sensor's pixels, which would also be monochrome if the array weren't there. The readout of these pixels is then automatically converted to a color image, or stored as RAW for later processing.

This color filter array is called a "Bayer filter".

https://en.wikipedia.org/wiki/Bayer_filter


I'd say there's still a difference in that while DSLRs and smartphone sensors have individual readouts for each colour channel, scientific instruments do not. So a DSLR or smartphone camera still outputs a multichannel image with a single sensor, while scientific instruments are strictly monochrome.

They don't take multi-channel pictures in a single shot; each full image represents a single channel. HST, for example, has three cameras - one per channel - while the MER cameras have a single sensor with multiple filters.


From what I know, sensors do not have different readouts for the different color channels. All pixels are read out the same way and, with knowledge of the pattern of the Bayer filter in use, are then processed ("demosaiced") into a color image.

If one removed the Bayer filter from a sensor (as difficult as that is) and skipped demosaicing while processing the image, one would end up with a perfectly usable monochrome image. Using physical color filters for the different channels and capturing the same scene through each of them, one could create a single color image with even better resolution than the camera normally produces. Why better? (This might disappoint you!) Consumer cameras count each differently colored 'sub'pixel as a full pixel. So half of your camera's "megapixels" are, e.g., green, and just a quarter each are red and blue (that's a common ratio for the colors). The camera then 'fakes' having a full set of RGB pixels by demosaicing (see https://en.wikipedia.org/wiki/Demosaicing). Taking monochrome images through physical color filters would allow the full resolution of the camera to be used for each color channel.
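
As a toy illustration of that resolution trade-off (fake data, and nothing like a real camera's image-signal processor; crude 2x2 binning stands in for proper demosaicing):

    import numpy as np

    # Toy illustration with fake data: a raw mosaic where each pixel holds a
    # single color sample, laid out in the common RGGB pattern.
    raw = np.random.rand(8, 8)

    # The crudest possible "demosaic": collapse each 2x2 RGGB cell into one RGB
    # pixel, averaging the two green samples. The output has half the resolution
    # per side - exactly the trade-off the Bayer mosaic imposes.
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    rgb = np.dstack([r, (g1 + g2) / 2.0, b])   # shape (4, 4, 3)

    # Real cameras interpolate the missing samples at every pixel instead,
    # which keeps the nominal pixel count but means two of the three color
    # values at each pixel are estimates.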


That's exactly what I wrote, though.

> From what I know sensors do not have different readouts for the different color channels.

You know wrong, then. HST uses three camera sensors precisely so that it can have three different filters at the same time and at full resolution: https://www.stsci.edu/hst/instrumentation/legacy/wfpc2

So yes, the WF/PC2 does indeed have different readouts per channel, because each channel has its own dedicated sensor.

Maybe you simply misinterpreted what I wrote. The fact of the matter is that consumer-level sensors return multi-channel images (the sub-pixel filtering is an irrelevant technical detail), while scientific instruments use dedicated sensors per channel.

edit: just to clarify even further - there are no subpixel shenanigans with scientific instruments, both for higher precision and to avoid cross-talk between frequency bands; hence separate sensors per frequency filter. If you want multiple channels at once (e.g. space telescopes), you therefore need multiple sensors as opposed to subpixel filtering.


OK, I think I understand the confusion now. I understood "readout" as a hardware implementation detail (think: "readout circuit"), while it seems to refer to the data of a particular channel, right? My lack of familiarity with the lingo seems to be to blame, then.

That "subpixel shenanigans" on scientific apparatuses are inacceptable should be understood.


What you are saying doesn't make much sense. It's obvious that the images are taken at different wavelengths and recombined. Cameras with a Bayer filter use exactly the same principle but have worse spatial resolution, because they have micro-filters on each pixel and the result is interpolated. Foveon sensors work in the same way, with different layers sensitive to a specific wavelength. Even our eyes work in the same way, with the rods and cones being sensitive to luminance and different wavelengths. By your definition, what we see are not colour images, since they are processed after the fact by our brain from the separate wavelength signals coming from the receptors in our eyes.


You misinterpreted what I wrote then.

Yes, a consumer-level camera sensor uses Bayer filtering, but that's just an implementation detail, much like the rods and cones inside the retina - you don't perceive each channel individually after all, otherwise you wouldn't be able to perceive "brown" or "pink".

So any analogy between CCD sensor output and human eyes ends right there.

The difference lies in the output of the sensory mechanism (be that human eyes or CCD sensors). While consumer-level hardware outputs multichannel images (yes, by means of subpixels and Bayer filtering, but that's irrelevant - for the sake of argument it could just as well be pixies), scientific instruments use separate CCD sensors for each recorded channel to a) keep the full resolution and b) avoid cross-talk between different frequency bands, which is unavoidable with the subpixel setup used in commodity hardware.

So while your smartphone's or DSLR's RAW output will be a multichannel image, instruments onboard spacecraft will only ever output a single channel per sensor. No subpixels, no Bayer filters. Either one dedicated sensor per recorded channel (e.g. the HST WF/PC2 camera) or multiple images taken in series using a single sensor but multiple filters (e.g. the cameras onboard the MER and MSL rovers).

I hope that was a little clearer.


The Foveon sensor that I mentioned before works in exactly the same way, getting three full-resolution images at different wavelengths, exactly like the scientific cameras. And I'd argue that the only difference between this approach and the eye/Bayer sensor is the resolution; the underlying mechanism is exactly the same.


But does the Foveon sensor use RGB filters or does it work with different frequency bands? That's a major issue with science data - it doesn't contain RGB data for the most part, because that's not what scientists are interested in.

What's the correct colour channel to use for near UV (yes, I know that some people - mainly women - can perceive it, but the vast majority of people can't)? What matches near IR or even X-rays? How would you colourise the frequency bands that match different elements (those include very narrow bandpass filters) but don't really make sense in terms of RGB?

That's the difference right there. If you have a picture whose channels use filter bands that correspond to hydrogen emission, organic molecules, or UV radiation, none of that data correlates to human colour perception. So someone just chose shades of green for hydrogen or shades of blue for organics.

So what I'm trying to say is that the underlying mechanism is irrelevant - the data simply isn't colour data as seen by humans.


I'd argue that if you have sensors that take images at multiple wavelengths, and they are combined, it amounts to an imaging system (i.e. a camera) that senses colors - versus the upthread argument of "colored black and white images".


The issue is that the multi-wavelength combined image doesn't correspond to RGB like the RAW output of a DSLR or smartphone camera does.

The different bands received from spacecraft, rovers, and (space) telescopes are mapped to red, green, and blue, even though they're actually taken in the near-infrared, UV, or even X-ray parts of the spectrum.

The "truth" is that the images are only monochrome representations of recorded photons of different energies. Most people don't see near UV light for example and some images are taken in very narrow bands and then "stretched" into "full" RGB, etc.

The point is that the recorded data doesn't correspond to the human visual apparatus (or at least not well), and thus any multi-colour representation is necessarily fabricated and sometimes doesn't even try to reflect what a human would have seen.



