
That's done on purpose. The AI can't easily tell whether the drawing might be intended to be one of a human or a gorilla, so when in doubt it doesn't want to commit either way and just ignores the topic altogether. It's just another example of AI ethics influencing its behavior for alignment purposes.



Context (2018): https://www.wired.com/story/when-it-comes-to-gorillas-google...

"Google promised a fix after its photo-categorization software labeled black people as gorillas in 2015. More than two years later, it hasn't found one."

Companies do seem to have developed greater sensitivity to blind spots around diversity in their datasets, so the parent might not be totally out of line to bring it up.

IBM offloaded its domestic surveillance and facial recognition services after the BLM protests, when law enforcement interest sparked concerns about racial profiling and abuse, driven in part by low accuracy on higher-melanin subjects; and Apple's face unlock famously couldn't tell some Asian users apart.

It's not outlandish to assume that some special effort has been made to ensure that the datasets and evaluation in newer models don't ignite any more PR fires. That's not to claim Google's classification models have anything to do with OpenAI's multimodal models, just that until relatively recently, models from more than one major US company struggled to correctly identify some individuals as individuals.


It's not that at all... Similar drawings of non-humanoid shapes like an ostrich or the map of Europe would have resulted in the exact same 'blindness'.


This is not obvious to me, nor should it be to anyone who didn't program these AIs (and it probably shouldn't be obvious even to the people who did). I think you both should try testing the hypothesis and present your results.


What specifically do you base this on? It sounds like conjecture.


You didn’t read the article eh?

You just assumed it’s similar to the Google situation a few years back where they banned their classifier from classifying images as gorillas. It isn’t.


Um, no. That isn't even close to the truth. The AI doesn't "want" anything. It's a statistical prediction process, and it certainly has nothing like the self-reflection you are attributing to it. And the heuristic layers on top of LLMs are even less capable of doing what you are claiming.


You mean that whole post could have been written:

"AI can't see gorilla because wokification"?

/s

Edit: Adding /s. Thought "wokification" already signalled that.


You didn’t read it either.


I was /s-commenting on the comment :) I did read it, and could relate to the questioner's constant reframing to just "look" at the graph. Like talking to a child.



