
The thought that there's a magical machine I can just ask for an answer and it will provide the correct one is absolutely thrilling, but then I remember that at least a third of LLM-provided answers in my own domains of knowledge turn out to be wrong on closer inspection.

I don't think the human brain evolved to deal with a machine that promises to have all the answers, always speaks in an authoritative tone, and is trained to be agreeable. As a species we like shortcuts and we don't like to think critically; an easy-to-get answer will always be infinitely more appealing than a correct one, no matter how wrong it is.

Right now there's a generation of kids growing up who believe they don't have to learn anything because LLMs have all the answers. World leaders don't seem concerned about this, likely because a population that doesn't know how to think critically is easy to control.
