To explain this properly would take longer than the comment length limit allows (is there a limit? I don't know, but even if there isn't, I don't feel like explaining this for the 70th time), but here's why:
https://arxiv.org/pdf/2301.06627.pdf
To sum up: a human can think outside of their training distribution; an LLM cannot. A larger training distribution simply means you have to go farther outside the norm before the failure shows up. Solving this problem would require multiple other processing architectures besides an LLM, and a human-like AGI cannot be reached by simply predicting upcoming words. Functional language processing (formal reasoning, social cognition, etc.) requires other modules in the brain.
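For anyone unclear on what "simply predicting upcoming words" means mechanically, here is a minimal sketch. It assumes the Hugging Face `transformers` library, PyTorch, and the publicly available `gpt2` checkpoint, none of which come from the linked paper; they are just stand-ins for illustration. The point is that the model's entire output is a probability distribution over the next token; everything else is sampling from that distribution.

```python
# Minimal sketch of next-token prediction (assumes: transformers, torch, gpt2).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model's only output: a score for every token in its vocabulary,
    # at every position in the prompt.
    logits = model(**inputs).logits

# Convert the scores at the final position into a probability distribution
# over the next token -- this distribution is the whole "answer".
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

for p, tok_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(tok_id):>10s}  p={p.item():.3f}")
```

Whether stacking enough of this can ever amount to reasoning is exactly the dispute; the sketch just shows what the base operation is.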
This explains the various pejorative names given to LLMs: stochastic parrots, Chinese Rooms, and so on.