
Explaining this properly would take longer than the comment limit allows (is there a length limit? I don't know, but even if there isn't, I don't feel like explaining this for the 70th time), so here's the short version, with the paper that lays it out: https://arxiv.org/pdf/2301.06627.pdf To sum up: a human can think outside of their training distribution; an LLM cannot. A larger training distribution just means you have to go farther outside the norm before it breaks down. Solving this would require multiple other processing architectures besides an LLM, and a human-like AGI cannot be reached simply by predicting upcoming words. Functional language processing (formal reasoning, social cognition, etc.) relies on other modules in the brain beyond the language network.

This is what's behind the various pejorative names given to LLMs: stochastic parrots, Chinese Rooms, and so on.



