I feel like the benefit AI gives us programmers is limited. These tools can be extremely advanced, accelerating, and helpful assistants, but we're limited to just that: architecting and developing software.
Biologists, mathematicians, physicists, philosophers and the like seem to have an open-ended benefit from the research which AI is now starting to enable. I kind of envy them.
I don't think AI is trustworthy or accurate enough to be valuable for anyone trying to do real science.
That doesn't mean they won't try, though. I think the replication crisis has illustrated how many researchers actually care about correctness versus just getting papers published.
If you're a skilled researcher, I expect you can already get great results out of unreliable AI assistants.
Scientists are meant to be good at verifying and double-checking results - similar to how journalists have to learn to derive the truth from unreliable sources.
These are skills that turn out to be crucial when working with LLMs.
Don't make the mistake of assuming all journalists are the same. There's a big difference between an investigative reporter at a respected publication and someone who gets paid to write clickbait.
Figuring out that the data is faulty is part of research.
That's what (good) journalism is: the craft of hunting down sources of information, figuring out how accurate and reliable they are, and piecing together as close to the truth as you can get.
A friend of mine is an investigative reporter for a major publication. They once told me that an effective trick for figuring out what's happening in a political story is to play different sources off against each other - tell one source snippets of information you've got from another source to see if they'll rebut or support it, or if they'll leak you a new detail because what you've got already makes them look bad.
Obviously these sources are all inherently biased and flawed! They'll lie to you because they have an agenda. Your job is to figure out that agenda and figure out which bits are true.
The best way to confirm a fact is to hear about it from multiple sources who don't know who else you are talking to.
That's part of how the human intelligence side of journalism works. This is why I think journalists are particularly well suited to dealing with LLMs - human sources lie and mislead and hallucinate to them all the time already. They know how to get (as close as possible) to the truth.
Same with using AI for coding. I can’t imagine anyone expecting to use LLM output verbatim, but maybe I’m just not good enough at prompting.
Manual testing, automated testing, and code review
All three of those are things that software engineers are reliably bad at and cut corners on, because they're the least engaging and least interesting parts of the job of building software.
Yep. Engineers who aren't willing to invest in those skills will have limited success with AI-assisted development.
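To make that concrete, here's a minimal sketch (my own hypothetical example, not something from this thread) of what investing in those skills can look like: treat an LLM-generated function as untrusted input and pin down the behaviour you actually need with a few quick tests before accepting it.

```python
import re

# Hypothetical scenario: slugify() came back from an LLM prompt.
# Before merging it, we write the tests ourselves and run them.
def slugify(title: str) -> str:
    """Turn a title into a URL-friendly slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_basic_title():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_whitespace_and_punctuation():
    assert slugify("  An -- odd   Title ") == "an-odd-title"

def test_empty_input():
    assert slugify("") == ""

if __name__ == "__main__":
    test_basic_title()
    test_collapses_whitespace_and_punctuation()
    test_empty_input()
    print("all checks passed")
```

The point isn't the function itself; it's that the testing and review work stays firmly with the human, which is exactly the part people are tempted to skip.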
I've seen a few people state that they don't like using LLMs because it takes away the fun part (writing the code) and leaves them with the bits they don't enjoy.
Biologists, mathematicians, physicists, and philosophers are already the experts who produce the text in their ___domain that the LLMs might have been trained on...
Biologists, mathematicians, physicists, philosophers and the like seem to have an open-ended benefit from the research which AI is now starting to enable. I kind of envy them.
Unless one moves into AI research?