Gamut matters more than accuracy here. If the monitor is skewed blue, or has darker reds, or something like that, the test remains valid. In general, asking for more blue will lead to a bluer pixel.
The exception is when a monitor is already maxed in blue, but that is about the gamut (range) of the monitor, not its accuracy.
I think those colors have been chosen very carefully, because I have zero issues rearranging them on the poor-quality TFT display of a MacBook Air 11". They seem quite distinct to me.
In particular, I suspect the DACs and gamma buffers used to derive the LCD driving voltages may have some non-monotonic effects, which would definitely affect the test: i.e. cases where increasing the numerical R, G, or B value actually decreases, even slightly, the intensity of the corresponding colour of light emitted.
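If you have a way to measure luminance per input step (a colorimeter, say), a quick sanity check for this kind of non-monotonicity is just to scan the measured ramp for any decrease; the readings below are made up for illustration:

```python
# Minimal sketch: scan a measured single-channel ramp for non-monotonic steps.
# `readings` is assumed to be luminance measurements (e.g. cd/m^2) taken while
# stepping one channel upward with the other two held constant.
def non_monotonic_steps(readings):
    """Indices where increasing the input value decreased the measured output."""
    return [i for i in range(1, len(readings))
            if readings[i] < readings[i - 1]]

# Hypothetical readings for blue-channel inputs 0, 32, 64, ..., 255:
readings = [0.3, 2.4, 7.9, 16.2, 15.8, 28.0, 41.5, 58.9, 79.3]
print(non_monotonic_steps(readings))  # -> [4]: the step up to index 4 went backwards
```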
I did this a few years ago on a work LCD monitor. It was much more difficult than on a U2410 at home, on which it was mostly pretty easy. But even that was more challenging than on a GDM-FW900, on which the colors and the gradient were crystal clear, with no need to do any swapping.
Or the settings of the monitor. I did it in "reader" mode and got a score of 19 struggling as a few boxes looked the same. Switched to "photo" mode and got a score of 0 easily.
It's not a matter of accuracy but of precision, i.e. a low delta-E doesn't matter here. Bad greyscale tracking / a non-flat gamma are what would primarily make this difficult, but almost any modern screen is pretty good in these areas.
> Alternatively: how good is your monitor's colour accuracy?
Mine is a terrible decade-old TN panel with banding on most coloured gradients... I got a perfect score, so it doesn't seem to require that much fidelity.
This particular color test is tricky because it's not a simple linear progression. After finishing the test, I converted the RGB values to HSV and found irregular jumps in hue (sometimes 3 degrees, sometimes 6) along with nonlinear changes in saturation and value levels. The irregularity probably makes it a better test, but it's annoying because when it's "perfect" it still looks ragged. :-)
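For anyone who wants to repeat the exercise, the conversion itself is straightforward; the hex values below are placeholders rather than the actual test tiles:

```python
# Sketch of converting tile colours to HSV to inspect the hue progression.
# The hex values are placeholders; substitute the actual RGB values from the test.
import colorsys

tiles = ["#e8552d", "#e85b22", "#e76317", "#e06a10"]  # placeholder colours

for hex_colour in tiles:
    r, g, b = (int(hex_colour[i:i + 2], 16) / 255 for i in (1, 3, 5))
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(f"{hex_colour}: hue={h * 360:6.1f}  sat={s:.2f}  val={v:.2f}")
```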
> The irregularity probably makes it a better test, but it's annoying
This made me take a look at what they're doing, and the sRGB values map fairly closely to their Munsell counterparts. The value has been fixed, which is nice for this kind of test, but yeah, non-linear hue progression. It probably helps as a discernment test, considering how much closer the center colors are to each other.
Yeah, that was lame. I mean the test was mildly interesting, but they could have provided a few more stats and not just the min and max for my gender. Maybe if they had decided to make a graph they would have thought about form validation and acquired useful results.
> swapping pairs clearly shows what’s wrong. It is like a bubblesorting by hand.
I found exactly the same; looking at the overall gradient only gets you so far. I wonder if the magnitude of the statistic (1 in 255 women and 1 in 12 men) has more to do with how many people figure out how to effectively sort things without being explicitly told than it does with colour perception. Perhaps an equivalent subtle sorting test done with something other than colour would reveal a similar statistic.
The difference is definitely genetic. Colorblindness is generally caused by recessive genes on the X chromosome. That means the proportion of men who are colorblind is basically equal to the proportion of X chromosomes carrying one of these recessive genes. Women have two X chromosomes, so they need the gene on both in order to exhibit colorblindness. This means their rate of colorblindness is roughly the square of the men's rate (actually less, because there are different types of colorblindness).
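To put rough numbers on that (the per-type frequencies here are illustrative assumptions, not measured figures):

```python
# Back-of-envelope for the X-linked argument above. The two carrier
# frequencies are hypothetical values for two distinct defect types.
p_types = [0.06, 0.02]                             # assumed per-type frequency on an X chromosome
p_men = sum(p_types)                               # men: one affected X suffices
p_women_same_type = sum(p ** 2 for p in p_types)   # women: the *same* defect on both X chromosomes
p_women_naive = p_men ** 2                         # the "square the male rate" approximation

print(f"men:   1 in {1 / p_men:.0f}")              # -> 1 in 12
print(f"women: 1 in {1 / p_women_same_type:.0f}")  # -> 1 in 250
print(f"naive: 1 in {1 / p_women_naive:.0f}")      # -> 1 in 156 (square of the male rate)
```

With the rate split across types, the female rate comes out lower than simply squaring the male rate, as the comment above says.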
Yes, but I'm talking about the statistic having a potentially invalid baseline, because the method of testing gives some people an advantage, separate from the physical ability to perceive colour, depending on how they complete the test.
Could we not also correct for that by simply telling the participants this technique? (I used it too, and had the same experience of only being 100% sure after the bubble sort.)
The trick for me was to compare [i] and [i + 2] rather than [i] and [i + 1]. If you're not sure which card is which, rather than swapping them, move one of them so there's a card between it and the other card.
I got 2 as well, but I also got frustrated when moving tiles didn't seem to register or something. I tried again, and it definitely seems like there are tiles out of place and I try to change them but they snap back sometimes.
In some cases moving one tile over another didn't work but it did work in the opposite direction. Definitely made the test a bit more difficult because it wasn't always obvious when a change had or hadn't worked.
Me as well. Funny thing is that with a non-calibrated monitor, I scored better than everyone in my art department including the art director... and I'm just a developer who knows a fringe level of Photoshop / Illustrator.
I scored 0 on a phone in quick time, and many other people here are reporting perfect scores as well. The fact that the scores in your art dept. are noticeably lower is thought-provoking.
" Best Score for your Gender -2147483648
Worst Score for your Gender 2147483647"
Well ok then, I guess my score is about in the middle. But I think maybe they have something going on that's not quite what they intended.
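Those two numbers are the 32-bit signed integer extremes, which looks like best/worst trackers that never saw any data for the selected group. Purely a guess at the shape of the bug:

```python
# Guess at what produces those numbers: extreme-value trackers initialised to
# 32-bit sentinels and reported as-is when no scores match the gender filter.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def gender_stats(scores):
    best, worst = INT32_MIN, INT32_MAX   # sentinels, never replaced if `scores` is empty
    for s in scores:
        best = max(best, s)
        worst = min(worst, s)
    return best, worst

print(gender_stats([]))  # -> (-2147483648, 2147483647), exactly the page output
```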
They mention that 0 is the perfect score, though I'm not sure how the highest and lowest they mention relate to one another (let alone how someone's score could be that far off). But I also got a score of 2, also in Night Shift mode. My weakest area was 17 (a greenish-teal slice of the spectrum), which would make some sense given how yellow/orange relate to green and blue.
It would be interesting if they factored in time spent.
I'd also be interested to know if people have different strategies for solving these. I first arranged them into left and right halves, then went through and tweaked the order until I was happy. Toward the end, I did a couple reversals just to see if it looked better or worse. In all cases it looked worse and I switched it back. Final score was 0.
Just did a rough ordering and then went through swapping each to see if that improved matters, and ended at a score of 0. I wonder if years of amateur photography and editing have helped, similar to grinding through IQ tests.
I'm curious what you'd be looking for in time spent. It seems to me the result is just right or wrong. I personally did it intermittently while I was gaming on my other monitor. If anything, I'd say adding a confidence weighting on each color scale would gather more data.
I did basically the same, and also ended up with 0. Likewise I didn't end up finding any problems swapping adjacent blocks, but it did make it very clear that the order was correct.
To me, this feels like a clever ad for their products: most people who aren't colorblind will feel good because they get a perfect score, and will then want to read on while being receptive. Use of the term "IQ" may further flatter the ego and attract people to the test.
I admit I enjoyed doing the test. I think I would have liked something like "aha, you were wrong on these colors!".
Well, that was different; I haven't actually seen one of these before. Thanks for sharing.
PS:
Best Score for your Gender -2147483648
Worst Score for your Gender 2147483647
I got a perfect score. I don't know if my color differentiating ability is particularly great. I think a good algorithm is: (1) put them in roughly the right order, then (2) do a bubble sort.
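As a sketch, with a "does this pair look wrong?" judgement standing in for the human eye:

```python
# Sketch of the rough-order-then-bubble-sort strategy. `looks_out_of_order`
# stands in for the human judgement of whether two adjacent tiles should swap.
def bubble_pass(tiles, looks_out_of_order):
    """One left-to-right pass; returns True if anything changed."""
    changed = False
    for i in range(len(tiles) - 1):
        if looks_out_of_order(tiles[i], tiles[i + 1]):
            tiles[i], tiles[i + 1] = tiles[i + 1], tiles[i]
            changed = True
    return changed

def sort_tiles(tiles, looks_out_of_order):
    # Step (1), the rough ordering, is done by eye; step (2) repeats passes until stable.
    while bubble_pass(tiles, looks_out_of_order):
        pass
    return tiles
```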
I find it a little odd that they didn't mention that as being one of the main factors that can affect the outcome of the test.