
I'm not sure I'd be able to tell it was supposed to be a gorilla specifically, without context.



I think that’s part of the author's point. The article starts out by explaining a human phenomenon and then extends it to LLMs.


Humans only failed to spot it when the prompt misdirected their attention, though.


True. What I think is missing (and probably the more interesting question) is an analysis of _why_ the LLMs failed to spot it. I imagine it has something to do with the model architecture.




