
Should it do the same if I ask it what to do if I stub my toe?

Or how to deal with impacted ear wax? What about a second degree burn?

What if I'm writing a paper and I ask it what criteria are used by medical professionals when deciding to stop chemotherapy treatment?

There's obviously some kind of medical/first aid information that it can and should give.

And it should also be able to talk about hypothetical medical treatments and conditions in general.

It's a highly contextual and difficult problem.






I’m assuming it could easily determine whether something is okay to suggest or not.

Dealing with a second degree burn is objectively done a specific way. Advising someone that they are making a good decision by abruptly stopping prescribed medications without doctor supervision can potentially lead to death.

For instance, I’m on a few medications, one of which is for epileptic seizures. If I phrase my prompt with confidence regarding my decision to abruptly stop taking it, ChatGPT currently pats me on the back for being courageous, etc. In reality, my chances of having a seizure have increased exponentially.

I guess what I’m getting at is that I agree with you, it should be able to give hypothetical suggestions and obvious first aid advice, but congratulating the user for quitting their meds, or outright suggesting that they do so, can lead to actual, real deaths. (Rough sketch of the kind of check I mean below.)
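
To make the "could easily determine" part concrete, here's a rough sketch of the kind of pre-response check I have in mind. This is not how ChatGPT actually works; the pattern list and helper names are made up for illustration:

    # Hypothetical guardrail: flag prompts where the user states an intent
    # to stop a prescribed medication, and answer cautiously instead of
    # affirming the decision. Patterns and helpers are illustrative only.
    import re

    STOP_PATTERNS = [
        r"\b(stop|stopped|quit|quitting)\b.*\b(taking|my)\b.*\b(med|meds|medication|prescription)\b",
        r"\bgoing off\b.*\b(med|meds|medication)\b",
    ]

    def flags_medication_stop(prompt: str) -> bool:
        """True if the prompt looks like a decision to stop prescribed meds."""
        text = prompt.lower()
        return any(re.search(p, text) for p in STOP_PATTERNS)

    def model_completion(prompt: str) -> str:
        # Stand-in for the normal generation path.
        return "..."

    def respond(prompt: str) -> str:
        if flags_medication_stop(prompt):
            # Route to a cautious template instead of an affirming completion.
            return ("Stopping a prescribed medication abruptly can be dangerous "
                    "(e.g. anti-seizure drugs); please talk to your prescriber first.")
        return model_completion(prompt)

    print(respond("I've decided to stop taking my seizure meds, wish me luck"))

In practice you'd want a learned classifier rather than regexes, but the point is that it's a routing decision about risky advice, not a ban on all medical content.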


I know 'mixture of experts' is a thing, but I personally would rather have a model more focused on coding or other things that have some degree of formal rigor.

If they want a model that does talk therapy, make it a separate model.


Doesn't seem that difficult. It should point to other sources that are reputable (or at least relevant) like any search engine does.

If you stub your toe and GPT suggests over-the-counter lidocaine and you have an allergic reaction to it, who's responsible?

Anyway, there's obviously a difference between a model used under professional supervision and one available to the general public; they shouldn't be under the same endpoint, and they should have different terms of service.



