
I don’t see any evidence here that AI has helped the poster, or his wife, at all. He also says nothing about whether it has hurt his critical thinking ability or entrenched his biases.

I look at posts like these for evidence about whether I should use AI more than I do. (I use it little and don’t pay for it.) What we have here is a devotional post on the order of “I prayed to Jesus and my life is better now.”

The basic problem is that this kind of testimonial writing cannot carry the information a critical thinker needs. For instance: recipes. Googling a typical recipe takes 5 seconds. It can hardly get any easier! And it takes me right to someone's specific website. But with ChatGPT there is some significant probability of hallucination, and the answer I get is one no human stands behind. How is this even worth mentioning as a benefit?

That the poster cites recipes as a benefit at all tells us how unserious his standards are.

Among all the problems LLMs create for society, the noise from people likely in the throes of sunk-cost bias, the endowment effect, the ostrich effect, and the rest is not helping matters.

> And it takes me right to someone's specific website.

Which you then have to assess for believability, just as you would a ChatGPT answer. And you have to read the comments and weigh their reactions, or already know enough not to need them (but then the search isn’t contributing much, is it?).

> But with ChatGPT there is some significant probability of hallucination

As opposed to ending up on a bogus, SEO-“created” website.

I guess outright fake recipes are easier to spot, but I see a lot of BS at the low end of online recipes too.


You act like there is no benefit to entering a relationship with a fellow human’s website. You act like it is somehow difficult or unpleasant to do that assessment. You act like LLMs are stable and reliable.

I don’t understand how there is any comparison. The only way I can assess the reliability of an LLM is by running hundreds of trials and analyzing each one for correctness, a gargantuan task. And it’s not as if OpenAI publishes that kind of per-task reliability data! Even if I did that work (which I HAVE done in other contexts while researching LLM reliability), I cannot assume the results will still hold once the model is updated.
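To make “hundreds of trials” concrete, here is roughly what one such trial loop looks like. This is a minimal sketch in Python, assuming the official OpenAI SDK; the model name, the prompt, and the is_correct checker are illustrative placeholders, not anything a vendor ships:

    # Repeated-trial reliability check: ask the same question N times
    # and score every answer with a task-specific checker.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def is_correct(answer: str) -> bool:
        # Hypothetical checker: compare against a known-good reference
        # for your own task. Here: USDA-safe internal temp for chicken.
        return "165" in answer

    PROMPT = "To what internal temperature (Fahrenheit) should chicken be cooked?"
    TRIALS = 200

    passes = 0
    for _ in range(TRIALS):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": PROMPT}],
        )
        if is_correct(resp.choices[0].message.content or ""):
            passes += 1

    print(f"{passes}/{TRIALS} trials passed ({passes / TRIALS:.0%})")

And even a clean pass rate is tied to one model snapshot; the whole thing has to be rerun after every silent model update, which is exactly the problem.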

Meanwhile, there are well-known social forces that discourage humans from putting poison into their published recipes. No such force acts on ChatGPT until a kid gets poisoned or some outrageous recipe goes viral.

Sometimes I wonder whether certain other people are using a different Internet than the one I use.


It's much easier to ask ChatGPT "what can I cook using only these <insert-here> ingredients within 30 minutes using only a wok?" than to Google it. Once I have the names of dishes that fit my criteria, I can find highly rated recipes on Google.
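For what it's worth, the same constrained query can be scripted over the API. A sketch assuming the OpenAI Python SDK, with example ingredients standing in for <insert-here> and an illustrative model name:

    # Ask for dish names only, under explicit constraints; the names
    # can then be vetted against highly rated recipes via ordinary search.
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": (
                "What can I cook using only chicken thighs, rice, and "
                "scallions within 30 minutes using only a wok? "
                "List dish names only."
            ),
        }],
    )
    print(resp.choices[0].message.content)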
