I give AI a “water cooler chat” level of veracity, which means it’s about as true as chatting with a coworker at a water cooler when that used to happen. Which is to say if I just need to file the information away as a “huh” it’s fine, but if I need to act on it or cite it, I need to do deeper research.
Yes, so often I see/hear people asking "But how can you trust it?!"
I'm asking it a question about social dynamics in the USSR. What's the worst thing that'll happen?! I'll get the wrong impression?
What are people using this for? Are you building a nuclear reactor where every mistake is catastrophic?
Almost none of my interactions with LLMs "matter"; they're things I'm curious about. If 10 out of 100 things I learned from it are false, then I still learned 90 new things. And these are mostly things I'd have no way to learn about otherwise (without spending significant money on books, classes, etc.)
I try hard not to pollute my learning with falsehoods. Like, I really hate spending time learning BS; not knowing is way better than knowing something wrong.
That's certainly true, but I think it's also true that you have more contextual information about the trustworthiness of what you're reading when you pick up a book or magazine, or load a website.
As a simple example, LLMs will happily incorporate "facts" learned from marketing material into their knowledge base and then regurgitate them as part of a summary on the topic.