Hacker News

I agree. I've just seen it hallucinate too many things that on the surface seem very plausible but are complete fabrications. Basically, my trust is near zero for anything ChatGPT, etc. spits out.

My latest challenge is dealing with people who trust ChatGPT to be infallible and just quote the garbage to make themselves look like they know what they're talking about.




> things that on the surface seem very plausible but are complete fabrications

LLMs are language models; it's crazy that people expect them to be correct in anything beyond surface-level language.


Yeah, I was probably being a bit too harsh in my original comment. I do find them useful; you just have to be wary of the output.





