
It may not be AGI, but I don't think it's for that reason. Many humans would make the exact same error by reading too quickly and seeing "pound [of] coin," and I would still consider them to have "general intelligence."



It's nevertheless interesting how LLMs seem to default to the 'fast thinking' mode of human cognition -- even CoT approaches seem to mimic 'slow thinking' only by forcing the LLM to iterate through different options. The failure modes I see are very often the sort of thing I would do if I were unfocused or uninterested in a problem.
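
(A minimal sketch, not from the thread, of the contrast being described: the same question asked directly versus with an explicit chain-of-thought instruction that forces step-by-step iteration before the answer. The question text and prompt wording are hypothetical examples, not any particular system's API.)

    # Contrast a "fast" direct prompt with a CoT prompt that forces
    # the model to iterate through the problem before answering.
    question = "Which weighs more: a pound of feathers or a pound of coins?"

    # "Fast" prompt: the model answers immediately, System-1 style.
    fast_prompt = question

    # CoT prompt: asks for explicit intermediate steps first.
    cot_prompt = (
        question
        + "\nThink step by step: restate the question, check the units"
        " on each side, consider how the quantities differ, then give"
        " a final answer."
    )

    print(fast_prompt)
    print(cot_prompt)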



