Scientists aren't using ChatGPT – here's why (nature.com)
22 points by wjb3 on Dec 21, 2023 | hide | past | favorite | 10 comments



I’m a scientist, and I use LLMs fairly frequently. The first and primary use is for coding. Copilot is quite helpful and automates writing a bunch of the tedious steps in data wrangling and complex plot generation.

I have also been using perplexity.ai lately. I’ve been branching out into a new area of biology that I’m not terribly familiar with, and it’s been quite helpful in reviewing the literature. I can ask it questions like “what’s the role of gene X in process Y”. Perplexity cites its sources when providing answers, and that’s a huge benefit over vanilla ChatGPT. I essentially never use ChatGPT for factual queries like that, because I don’t trust it and I don’t have an easy way to check its answers. Perplexity feels much more like reading Wikipedia: I don’t entirely trust the base text, but I can always go directly to the source.

That said, Perplexity’s answers lack a level of coherence and specificity that you get from a good human-authored review of a subject. It tends to be too broad in its answers, and will often string together multiple sources that are only tangentially related. Like many LLMs it’s also quite overconfident, and doesn’t like to say “no” or “I don’t know” as an answer.


The article’s actual title is "These scientists aren't using ChatGPT – here's why", which is meaningfully different, and better.


I agree. It changes everything.


There are still whole areas in linguistics and archaeology where the current consensus exists partly as word of mouth, in lecturers’ handouts, or in PowerPoint slides at conferences. That is, materials that likely weren’t part of ChatGPT’s training. Consequently, anything ChatGPT tells you is based on more popular treatments of the subject that may reflect views that are now superseded.


Yeah, GPT-4 as it exists now is a great teacher if you’re new to a subject or are doing basic research. Its use fades as you get deeper into technical or niche subjects. It’s fine now for writing emails and other housekeeping tasks. GPT-3.5 was not good enough IMO to do any of these things.

If there is another capability leap like there was between 3 and 4, then 5 will certainly have a hugely expanded set of capabilities and uses, from translation to writing to more technical and niche research and learning. I am personally excited for the GPT of next year, or whenever that capability leap happens.


> about 78% of researchers do not regularly use generative AI tools such as ChatGPT

22% of scientists regularly using ChatGPT already sounds like a lot to me, and a cause for concern.


My roommate is a non-native English speaker but needs to publish papers in English. They use ChatGPT to improve the English grammar. Sounds fine to me.


While very impressive, the latest AI craze is a huge distraction. In fields where original research and discovery are key, AI will for the foreseeable future have little marginal impact. My 2 cents. Canadian.


Because ChatGPT is just a very knowledgeable bullshitter.


tl;dr: it did not perform well enough to be useful.

I concur




