ChatGPT is very useful for me, to the point that I pay the subscription fee. To me it IS the product.



I haven't found a use for it. What do you use it for?


I find it orders of magnitude more helpful for getting started on research when I don't yet know how to formulate the question I'm trying to answer.

I find it useful for:

  * throwing ideas at a wall and rubber-ducking my emotional state and feelings.
  * creating silly, meme images in strange circumstances, sometimes.
  * answering simple "what's the name of that movie / song / whatever" questions

Is it always right? Absolutely not. Is it a good starting point? Yes.

Think of it like school in the early days of Wikipedia: "Can I use Wikipedia as a source? No. But you can use it to find primary sources and use those in your research paper!"


It is my new Google, ever since Web Search and the Web itself stopped working as a source of information.

When I look for answers to specific questions, I either search Wikipedia or ask ChatGPT. "Searching the Internet" doesn't work anymore with all the ads, pop-ups, and "optimized" content I have to wade through before I find the answers.


Hmm, ChatGPT doesn't seem to provide accurate information often enough to be trustworthy. Have you tried an ad blocker and another search engine like DDG/Bing? Starting with Wikipedia is a good approach too.


It's outrageously good at translation. I can set it to record next to my wife and her grandma speaking in Korean, through phone speakers, and it creates a perfect transcription and translation. Insane.


How are you checking that the translation is correct?


Wife is next to me and checks after. Never had a single wrong word.


What is the delay? I imagine translation-specific models would be better?


Like a minute; it's not real time. You click record, let it transcribe, then ask it to translate.


Everything!

It's like talking to an intelligent person about a topic you want to learn, but they know it well enough to teach you if you keep asking questions.


I've found it to be like talking to someone who pretends to know things, but lacks understanding when you probe further.


Probe it for summarised information; don't outsource your thinking to AI.

Even when you are writing code: describe the solution and ask the AI to write it; don't just give the AI the requirements and hope it will come up with the solution.


For many things in general, and for IT-related tasks in particular.

  * Write me a shell script which does this and that.
  * What Linux command, with what arguments, do I call to do such and such a thing?
  * Write me a function / class in language A/B/C that does this and that.
  * Write me a SQL query that does this and that.
  * Use it as a reference book for any programming language or whatever other subject.

etc. etc.
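For example (the task here is made up, just to illustrate the shape of these requests), a prompt like "write me a function that groups log lines by date" typically comes back as something close to:

    # Hypothetical example of the kind of output you get, not an actual transcript.
    from collections import defaultdict
    from datetime import date

    def group_lines_by_date(lines):
        # Assumes each line starts with an ISO date prefix like "2024-05-01 ..."
        groups = defaultdict(list)
        for line in lines:
            day = date.fromisoformat(line[:10])
            groups[day].append(line)
        return dict(groups)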

The answers sometimes come out wrong and/or contain non-trivial bugs. Being an experienced programmer, I usually have no problem spotting those just by looking at the generated code or by running a test case created by ChatGPT. Other times there are no bugs but the approach is inefficient; in that case, if I point out why it is inefficient and what to use instead, ChatGPT will fix the approach. Basically it saves me a shit ton of time.
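By "running a test case" I mean something as simple as this (again, a made-up continuation of the sketch above, not real ChatGPT output):

    # Quick sanity check against the hypothetical function above.
    lines = [
        "2024-05-01 server started",
        "2024-05-01 connection accepted",
        "2024-05-02 server stopped",
    ]
    grouped = group_lines_by_date(lines)
    assert len(grouped[date(2024, 5, 1)]) == 2
    assert len(grouped[date(2024, 5, 2)]) == 1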


Do you have any strategy for prompting? I've found I need to spend a lot of effort coaxing it to understand the problem. Hallucinations were pretty common, and it led me down a couple of believable but non-viable paths. Like coding an extension for Chrome, but doing things the permission system will not allow.


I use it to auto generate responses to ridiculous rhetorical questions on hacker news


It gives amazing medical advice and answers, way better than WebMD or frankly even most primary care physicians I've seen.


Are you a doctor? On what basis are you evaluating medical advice?


Whether or not it works!


Terrifying. You must like living dangerously.


I am lucky enough to have several family members who are MDs and am close to a few more.

They google crap all the time. They're as unaware of things as we are.

The tests they have access to are much better than anything we can get our hands on though.


They are NOT as unaware of things as we are. That’s like someone seeing a software developer googling stuff and saying “see, they don’t know much more than me”.

An expert refreshing their knowledge on Google is not the same as a layman learning it for the first time. At all.


How do you know that? Are you a doctor?


Ask ChatGPT -> get advice -> double-check on Google -> try advice -> my medical issue is solved

This has happened to me 3-4 times, and it hasn't steered me wrong yet. Meanwhile I've had doctors misdiagnose me or my wife a bunch of times in my life. Doctors may have more knowledge, but they barely listen, and often don't keep up with the latest stuff.


This is one thing people who shit on chatbots don't get. It doesn't need to be God to be useful; it just needs to beat out a human who is bored, underpaid, and probably underqualified.


Hope it never hallucinates on you. Doctors will now need to start warning against DoctorGPT, MD, not just Doctor Google, MD.

Maybe I'm a low-risk guy, but I would never follow a medical solution spit out by an LLM. First, I might be explaining myself badly, hiding important factors that a human would spot immediately. Then, yeah, there's the hallucination issue, and if I have to double-check everything anyway, well, I'd rather just trust a (good) professional.


Yeah, ChatGPT itself is amazing. What I don't understand is why other companies are paying so much for training hardware now. Are they trying to make more specialized LLMs now that ChatGPT has proven the technology?



