
Given that o3 is trained on the contents of the Internet, and the answers to all these chess problems are almost certainly on the Internet in multiple places, in a sense it has been weakly trained on this content. The question for me becomes: is the LLM doing better on these problems because it's improving at reasoning, or because it's simply improving at information retrieval?
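
A crude way to separate the two empirically: famous published problems may well be memorized, but a position reached by playing random legal moves is astronomically unlikely to appear verbatim anywhere in the training data, so you can compare accuracy on the two sets. A minimal sketch, assuming the python-chess library (the actual LLM query and engine scoring are left out):

    # Contamination probe sketch: compare model accuracy on famous
    # (likely-memorized) chess problems vs. freshly generated positions.
    # Assumes the python-chess library is installed.
    import random
    import chess

    def random_position(n_moves: int = 20, seed: int = 0) -> chess.Board:
        """Play n_moves random legal moves from the starting position."""
        rng = random.Random(seed)
        board = chess.Board()
        for _ in range(n_moves):
            moves = list(board.legal_moves)
            if not moves:  # game ended early (mate or stalemate)
                break
            board.push(rng.choice(moves))
        return board

    if __name__ == "__main__":
        board = random_position(seed=42)
        # Feed this FEN to the model as a "find the best move" puzzle,
        # then score its answer against an engine such as Stockfish
        # (e.g. via chess.engine.SimpleEngine).
        print(board.fen())

If accuracy collapses on the fresh positions while staying high on the well-known ones, that's evidence for retrieval over reasoning.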





And then there's the further question of where we draw the line in ourselves. One of my teachers -- a philosopher -- once said that real, actual thought is incredibly rare. He's a world-renowned expert but says he can count on one hand the number of times in his life that he felt he was thinking rather than remembering and reorganizing what he already knew.

That's not to say the question "are you remembering or reasoning?" means the same thing when applied to humans as when applied to LLMs.


> One of my teachers -- a philosopher -- once said that real, actual thought is incredibly rare

Probably should listen to psychologists and neuroscientists about this, not philosophers, tbh.


When it comes to describing what the brain physically does, sure. When it comes to naming and interpreting, the philosophers are the ones to trust.


