Given that o3 is trained on the contents of the Internet, and the answers to all these chess problems are almost certainly on the Internet in multiple places, in a sense it has been weakly trained on this content. The question for me becomes: is the LLM doing better on these problems because it's improving at reasoning, or simply because it's improving at information retrieval?
And then there's the further question of where we draw the line in ourselves. One of my teachers -- a philosopher -- once said that real, actual thought is incredibly rare. He's a world-renowned expert but says he can count on one hand the number of times in his life that he felt he was thinking rather than remembering and reorganizing what he already knew.
That's not to say that the question "are you remembering or reasoning?" means the same thing when asked of humans as it does when asked of LLMs.