
I think we can be chill about it. For someone on the sidelines, mildly interested in the topic, an AI summary of the regex might be useful — and the person getting downvotes is saving some of us the time.



If I'm feeling ill, should I trust Google when it says that I am probably having "internet connectivity problems", or should I go to a proper doctor instead?

"LLM says X" is the new "I Googled it and here's a copy/paste of the top result". It adds literally nothing of value to the discussion.

Unless of course the discussion is specifically about the quality of LLMs - in which case you should be vetting the answer yourself so you can actually say something meaningful about it.


Yes, it might be misinformation, but at least it's convenient misinformation? Is that the idea? And yes, it's true that a source like Wikipedia could be wrong too, but Wikipedia content has to withstand scrutiny to remain on the site; there are no such checks and balances for AI output.


It's clearly labeled as the output of Sonnet 3.7, not truth. We all need to apply our own critical analysis of anything we read, whether it's claimed to be from an LLM or from Wikipedia. The possibility of inaccuracy isn't a reason to withhold comment.


So what do we do with that information? If I apply a critical framework to interpreting the LLM output, the answer is to reject it: not only is it not necessarily true, but the LLM isn't even trying to be correct; it's strictly trying to produce convincing sentences.

What value does a link to a source add when that source isn't held to any standard of being informative? It seems like a waste of everyone's time to me.


Yes, I'd rather have it cited as an unreliable source than uncited. Every human will happily regurgitate misinformation they've absorbed unknowingly, so it's not that much worse than a confident human comment with no real citation besides themselves.

However, I also think it's valid to question what it adds to a conversation if someone is quoting it verbatim. Would we be happier if HN was like Quora and automatically added an AI response to everything?


Convenient misinformation? Well, that's one way to put too fine a point on it.

That or I must be a lot more LLM-accepting than most of HN.

