
> Being biased is not the same as hallucinating. LLMs have both problems.

I didn't deny either of those things; I said that search engines also hallucinate — my actual link gave several examples, including "King of the United States" -> "Barack Obama".

Just because it showed the link to Breitbart doesn't mean it wasn't hallucinating.

> At least you could check whether a source was reputable and where the bias was.

The former does not imply the latter. You could tell where a search engine got an answer from, but not which answers were hidden — an argument that I saw some on the American right make to criticise Google for failing to show their version of events.

> With LLM's the connection between the answer and the source is completely lost. You can't even tell why it answered a certain way.

Also not so. The free version of ChatGPT supports search directly, so its answers can come with references to the sources it used.




> I said that search engines also hallucinate — my actual link gave several examples

They don't. Google added a weird widget that does hallucinate, but the result list itself is still accurate, even though it may be biased towards certain sources.

> You could tell where a search engine got an answer from, but not which answers were hidden

A bit pedantic, but a search engine returns a list of results according to the query you posted. There's no question-answer oracle. If you type "King of the United States", you will get pages that contain those terms. Maybe there will be semantic manipulations like "King -> Head of state -> President", but generally it's on you to post the correct keywords.
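
To make the "keyword matching, maybe with some synonym expansion" point concrete, here's a toy sketch in Python. The page data and synonym map are made up, and real engines do far more than this; it just shows why a query like "King of the United States" can surface President pages without the engine ever "answering" anything:

    # Hypothetical mini-index: page title -> page text (made-up data)
    PAGES = {
        "Barack Obama": "Barack Obama served as President of the United States.",
        "Monarchy of the UK": "The King is the head of state of the United Kingdom.",
        "US government": "The United States has a President, not a king.",
    }

    # Hypothetical synonym map standing in for "semantic manipulations"
    SYNONYMS = {"king": {"president", "head of state"}}

    def expand(terms):
        expanded = set(terms)
        for t in terms:
            expanded |= SYNONYMS.get(t, set())
        return expanded

    def search(query):
        terms = expand(query.lower().split())
        results = []
        for title, text in PAGES.items():
            haystack = (title + " " + text).lower()
            # score = how many query terms (or their synonyms) appear on the page
            score = sum(1 for t in terms if t in haystack)
            if score:
                results.append((score, title))
        # Return a ranked list of pages, not an answer
        return [title for score, title in sorted(results, reverse=True)]

    print(search("King of the United States"))

The output is just a ranked list of pages containing the terms; nothing in there claims anyone is "King of the United States".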



