
Why do people have such narrow views on what makes LLMs useful? I use them for basically everything.

My son is throwing an irrational tantrum at the amusement park and I can't figure out why he's like that (he won't tell me, or he doesn't know himself either) or what I should do? I feed Claude all the facts of what happened that day and ask for advice. Even if I don't agree with the advice, at the very least the analysis helps me understand/hypothesize what's going on with him. Sure beats having to wait until Monday to call up professionals. And in my experience, those professionals don't do a better job of giving me advice than Claude does.

It's the weekend, my wife is sick, the general practitioner is closed, the emergency weekend line has 35 people in the queue, and I want some quick half-assed medical guidance that, while I know it might not be 100% reliable, is still better than nothing for the next 2 hours? Feed all the symptoms and facts to Claude/ChatGPT and it does an okay job a lot of the time.

I've been visiting a Traditional Chinese Medicine (TCM) practitioner for a week now and my symptoms are indeed reducing. But TCM paradigms and concepts are so different from western medicine's that I can't understand the doctor's explanation at all. Again, Claude does a reasonable job of explaining to me what's going on, or why it works, from a western medicine point of view.

Want to write a novel? Brainstorm ideas with GPT-4o.

I had a debate with a friend's child over the correct spelling of a Dutch word ("instabiel" vs "onstabiel"). Google results were not very clear. ChatGPT explained it clearly.

Just where is this "useless" idea coming from? Do people not have a life outside of coding?




Yes people have lives outside of coding, but most people are able to manage without having AI software intercede in as much of their lives as possible.

It seems like you trust AI more than people and prefer it to direct human interaction. That seems to be satisfying a need for you that most people don't have.


Why do you postulate that "most people don't have" this need? I also use AI non-stop throughout my day for similar uses.

This feels identical to when I was an early "smart phone" user with my Palm Pilot. People would condescend, saying they didn't understand why I was "on it all the time". A decade or two later, I'm the one trying to get others to put down their phones during meetings.

My take? Those who aren't using AI continually currently are simply later adopters of AI. Give it a few years - or at most a decade - and the idea of NOT asking 100+ AI queries per day (or per hour) will seem positively quaint.


>Those who aren't using AI continually currently are simply later adopters of AI. Give it a few years - or at most a decade - and the idea of NOT asking 100+ AI queries per day (or per hour) will seem positively quaint.

I don't think you're wrong, I just think a future in which it's all but physically and socially impossible to have a single thought or communication not mediated by software is fucking terrifying.


When I'm done working, have chased my children to finish their dinner properly, helped my son with his homework, and put them to bed, it's already 9+ PM — the only time of the day when I have free time. Just which human besides my wife can I talk to at that point? What if she doesn't have a clue either? All the professionals are only open when I'm working. A lot of the issues happen during the weekend, when professionals are closed. I don't want to disturb friends during the evening, and it's not like they have the expertise I need anyway.

LLMs are infinitely patient, don't think I am dumb for asking certain things, consider all the information I feed them, are available whenever I need them, have a wide range of expertise, and are dirt cheap compared to professionals.

That they might hallucinate is not a blocker most of the time. If the information I require is critical, I can always double check with my own research or with professionals (in which case the LLM has already primed me with a basic mental model, so that I can ask quick, short, targeted questions, which saves both of us time, and me money). For everything else (such as my curiosity about why TCM works, or the correct spelling of a word), LLMs are good enough.


You are supposed to have connections with knowledgeable people, so you can call them and ask for advice. That's how it works without computers.


Did you miss the parts where I said that I only have time when they're closed, and they're only open when I'm most busy?

Have you never seen knowledgeable people get things wrong, and had to verify them?

Did you miss the part where they cost money, and I better come in as prepared as possible?

I really don't get these knee-jerk averse reactions. Are people deliberately reading past my assertions that I double check LLM outputs for everything critical?


> LLMs are infinitely patient, don't think I am dumb for asking certain things

We don't know that. They could be laughing their ass off at you without telling you.


At the risk of sounding impolite or critical of your personal choices: this, right here, is the problem!

You don’t understand how medicine works, at any level.

Yet you turn to a machine for advice, and take it at face value.

I say these things confidently, because I do understand medicine well enough not to seek my own answers. Recently I went to a doctor for a serious condition and every notion I had was wrong. Provably wrong!

I see the same behaviour in junior developers who simply copy-paste whatever they see on StackOverflow or whatever they got out of ChatGPT with a terrible prompt, no context, and no understanding on their part of the suitability of the answer.

This is why I and many others still consider AIs mostly useless. The human in the loop is still the critical element. Replace the human with someone that thinks that powdered rhino horn will give them erections, and the utility of the AI drops to near zero. Worse, it can multiply bad tendencies and bad ideas.

I’m sure someone somewhere is asking DeepSeek how best to get endangered animal parts on the black market.


No. Where do you read that I take it at face value? I literally said that I expect Claude to give me "half-assed" medical guidance. I merely said that that is still better than having no clue for the next 2 hours while I wait on the phone with 35 people in front of me, which is completely different from "taking medical advice at face value". It's not like I will let my wife drink bleach just because Claude told me to. But if it tells me that it's likely an ear infection then at least I can discuss the possibility with the doctor.

I am curious about how TCM works. So what if an LLM hallucinates there? I am not writing papers on TCM or advising governments on TCM policy. I still follow the doctor's instructions at the end of the day.

For anything really critical I already double check with professionals. As you said, human in the loop is important. But needing human in the loop does not make it useless.

You are letting perfect be the enemy of good. Half-assed tax advice with some hallucinations from an LLM is still useful, because it will prime me with a basic mental model. When I later double check the whole thing with a professional, I will already know what questions to ask and what direction I need to explore, which saves time and money compared to going in with a blank slate.

The other day I had Claude advise me on how to write a letter to a judge to fight a traffic fine. We discussed what arguments to make, from what perspective a judge would see things, and thus what I should plead for. The traffic fine is a few hundred euros: a significant amount, but barely an hour's worth of a real lawyer's fee. It makes absolutely no sense to hire a real lawyer here. If this fails, the worst that can happen is that I won't get my traffic fine reimbursed.

There is absolutely nothing wrong with using LLMs when you know their limits and how to mitigate them.

So what if every notion you learned about medicine from LLMs is wrong? You learn why those notions are wrong, then next time you prompt/double check better, until you learn how to use it for that field in the least hallucinatory way. Your experience also doesn't match mine: the advice I get usually contains useful elements that I then discuss with doctors. Plus, doctors can make mistakes too, and they can fail to consider some things. Twitter is full of stories about doctors who failed to diagnose something but ChatGPT got it right.

Stop letting perfect be the enemy of good. Occasionally needing human in the loop is completely fine.


To be fair though, humanity doesn't know how some medicines work at a fundamental level either. The mechanism of action for Tylenol, lithium, and metformin, among others, isn't fully understood.


True, but modern "western"[1] medicine is not about the specific chemicals used, or even knowing exactly how they work at a chemical level, but the process for identifying what does and what does not work. It's an "evidence based" science, with experiments designed to counter known biases such as the placebo effect. Much of what we consider modern medicine was developed before we were entirely sure that atoms actually existed!

[1] It isn't actually western, because it's also used in the east, middle-east, south, both sides of every divide, etc... In the same sense, there is no "western chemistry" as an alternative to "eastern alchemy". There's "things that work" versus "things that make you feel slightly better because they're mild narcotics or stimulants... at best."

(I don't want to focus too much on Chinese herbal medicine, because I see the same cargo-culting non-scientific thinking in code development too. I've lost count of the number of times I've seen an n-tier SPA monstrosity developed for something that needed a tiny monolithic web app, but mumble-mumble-best-mumble-practices.)


"Western medicine" (which is exactly what it is called in China, to contrast with TCM) is shorthand for "practices invented in the west". That these methods chase universal truths, or are practiced world-wide, do not make them "non-west" in terms of origin.

The Chinese call the practice of truth seeking, in a broader sense (outside of medicine), just "science".

"Western" medicine is also not merely the practice of seeking universal medical truth. It is also a collection of paradigms that have been developed in its long history. Like all paradigms, there are limits and drawbacks: phenomena that do not fit well. Truth seeking tends to be done on established paradigms rather than completely new ones.

The "western" prefix is helpful in contrasting it with TCM, which has a completely different paradigm. Many Chinese, myself included, have the experience that there are all sorts of ailments that are not meaningfully solved by "western" medicine practitioners, but are meaningfully solved by TCM practitioners.


This reads like satire to me. Scary that it isn't.


I'm guessing that mindset is what causes some people to find this scary. I see a new tool and opportunities. Like all tools, it has drawbacks and caveats, but when wielded properly, it can give me more choice. I suspect some others focus too much on flaws and don't bother looking for opportunities. They are expecting a holy grail: if it's not perfect then it's useless.

It's like people who proclaim that Linux as a whole is a useless toy because it doesn't run their favorite games or favorite Windows app. They focus on this one flaw and miss all the opportunities.

Many of these people seem to advocate trusting human professionals. Do you have any idea how often human professionals do a half-assed job, and I have to verify them rather than blindly trusting them? The situation is not that much different from LLMs.

Professionals making mistakes do not make them useless. Grandma, with all her armchair expertise, is often right and sometimes wrong, and that does not make her useless either.

Why let perfect be the enemy of good?


Grandma has a reason to care about you.

By contrast, my trust of Russian / Chinese / USian platforms is low enough that I consider it my duty to publicly shame people who still use them in 2025.

(With some caveats of course; for instance, HN is not yet a negative to the world. Yet.)

There's also the question of the stickiness of habits: your grandmas are for life; with human professionals you might have a shallow enough relationship that switching is relatively easy; while it might be very hard to stop smoking, or to stop using Github, once you've started smoking / created an account.


You view Github and LLMs as traps that deliberately give you malicious advice or even brainwash you into addiction? If you view things that way then it's no surprise that you are averse to LLMs (and Github). But frankly I find that entire view to be absurd and overly cynical.


I too read it as satire at first, but after thinking twice I think it's a quite reasonable take. I've added "utilize LLMs more in my daily life outside programming" to my New Year's resolutions.


I had the flu at the beginning of December, with high fever, the whole nine yards. Keeping a running log with Claude in which I shared temperature readings, medications etc. has been so useful. If nothing else it's the world's most sophisticated rubber duck / secretary, but that's quite useful in many daily life situations on its own. Caveats apply etc.


Huh? The GP makes perfect sense. I’d never trust LLMs blindly, but I wouldn’t hesitate to ask them about any topic. “Trust but verify” is often said about human beings. Perhaps “distrust but ask and verify” is the mantra applicable for LLMs.


Scary. Reads rather like you're well on your way to replacing basic life skills with reliance on LLMs.



