I find it to be orders of magnitude more helpful at getting me started on the research journey when I don't know how to formulate the question of what I'm researching.
I find it useful for:
* throwing ideas at a wall and rubber-ducking my emotional state and feelings.
* creating silly, meme images in strange circumstances, sometimes.
* answering simple "what's the name of that movie / song / whatever" questions
Is it always right? Absolutely not. Is it a good starting point? Yes.
Think of it like school and the early days of Wikipedia: "Can I use Wikipedia as a source? No. But you can use it to find primary sources and use those in your research paper!"
It is my new Google, ever since Web Search and the Web itself stopped working as a source of information.
When I look for answers to specific questions, I either search Wikipedia or ask ChatGPT. "Searching the Internet" doesn't work anymore with all the ads, pop-ups, and "optimized" content I have to wade through before I find the answers.
Hmm, ChatGPT doesn't seem to provide accurate information often enough to be trustworthy. Have you tried an ad blocker and another search engine like DDG/Bing? Starting with Wikipedia is a good approach too.
It's outrageously good at translation. I can set it to record next to my wife and her grandma speaking in Korean, through phone speakers, and it creates a perfect transcription and translation. Insane.
Useful for many things in general, and for IT-related tasks in particular:
a) Write me shell script which does this and that.
b) what Linux command with what arguments do I call to do such and such thing.
c) Write me a function / class in language A/B/C that does this and that
d) write me a SQL query that does this and that
e) use it as a reference book for any programming language or whatever other subject.
etc. etc.
The answers sometimes come out wrong and/or contain non-trivial bugs. Being an experienced programmer, I usually have no problem spotting those just by looking at the generated code or by running test cases created by ChatGPT. Alternatively, there are no bugs but the approach is inefficient; in that case, if I point out why it is inefficient and what to use instead, ChatGPT will fix the approach. Basically it saves me a shit ton of time.
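To make (a) and (b) concrete, here is a sketch of the kind of script such a request tends to produce. This is a hypothetical example (not actual ChatGPT output), and it assumes GNU coreutils/findutils, i.e. a typical Linux box:

```shell
#!/bin/sh
# Hypothetical example of a "write me a shell script" request:
# list the 10 largest files under a directory (default: current dir).
# Assumes GNU du/sort (the -h human-readable flags).
dir="${1:-.}"
find "$dir" -type f -exec du -h {} + 2>/dev/null | sort -rh | head -n 10
```

This is exactly the sort of answer worth eyeballing before running: it is usually correct, but details like quoting, flag portability (`sort -h` is a GNU extension), or edge cases (filenames with newlines) are where the non-trivial bugs hide.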
Do you have any strategy for prompting? I've found I need to spend a lot of effort coaxing it to understand the problem. Hallucinations were pretty common, and they led me down a couple of believable but non-viable paths, like coding an extension for Chrome but doing things the permission system will not allow.
They are NOT as unaware of things as we are. That’s like someone seeing a software developer googling stuff and saying “see, they don’t know much more than me”.
An expert refreshing their knowledge on Google is not the same as a layman learning it for the first time. At all.
Ask ChatGPT -> get advice -> double-check on Google -> try advice -> my medical issue is solved
This has happened to me 3-4 times, and it hasn't steered me wrong yet. Meanwhile, I've had doctors misdiagnose me or my wife a bunch of times in my life. Doctors may have more knowledge, but they barely listen, and often don't keep up with the latest research.
This is one thing people who shit on chatbots don't get. It doesn't need to be god to be useful; it just needs to beat out a human who is bored, underpaid, and probably underqualified.
Hope it never hallucinates on you. Doctors will need to start warning against DoctorGPT MD now, not just Doctor Google MD.
Maybe I'm a low-risk guy, but I would never follow a medical solution spit out by an LLM. First, I might be explaining myself badly, hiding important factors that a human would spot immediately. Then there's the hallucination issue, and if I have to double-check everything anyway, well, I'd just trust a (good) professional.
Yeah, ChatGPT itself is amazing. What I don't understand is, why are other companies paying so much for training hardware now? Trying to make more specialized LLMs now that ChatGPT has proven the technology?