
I give AI a "water cooler chat" level of veracity, meaning it's about as true as chatting with a coworker at a water cooler, back when that used to happen. Which is to say: if I just need to file the information away as a "huh," it's fine, but if I need to act on it or cite it, I need to do deeper research.



Yes, so often I see/hear people asking "But how can you trust it?!"

I'm asking it a question about social dynamics in the USSR, what's the worst thing that'll happen?! I'll get the wrong impression?

What are people using this for? Are you building a nuclear reactor where every mistake is catastrophic?

Almost none of my interactions with LLMs "matter"; they are things I'm curious about. If 10 out of 100 things I learned from it are false, then I still learned 90 new things. And these are things I'd mostly have no way to learn about otherwise (without spending significant money on books/classes etc.)


I try hard not to pollute my learning with falsehoods. I really hate spending time learning BS; not knowing is way better than knowing something wrong.


If you don't care whether it's correct, you can also just make the stuff up. No need to pay for AI to do it for you.


Yes, but how do you know which is which?


That is also a broader epistemological question one could ask about truth on the internet, or even truth in general. You have to interrogate reality.


That's certainly true, but I think it's also true that you have more contextual information about the trustworthiness of what you're reading when you pick up a book or magazine, or load a website.

As a simple example, LLMs will happily incorporate "facts" learned from marketing material into their knowledge base and then regurgitate them as part of a summary on the topic.



