In the time since this publication, the Nobel Prize in Chemistry was awarded to an ML Engineer & AI researcher.
While it's accurate that there's a lot of AI slop out there, it's also a reasonable assertion that working at the frontier of AI & ML research can be a worthwhile use of your life.
Definitely has been true for my work. LLMs have absolutely been useful; I even forked an IDE (Zed) to add my own custom copilot and leverage a deeper integration for my work.
But even if we consider AI beyond just NLP, there's so much ML you can apply to other, more banal day-to-day tasks. In my org's case, one of the big ones was anomaly detection and fault localization in aggregate network telemetry data. It worked far better than conventional statistical modeling.
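Not our actual pipeline, but to give a flavor of that kind of approach, here's a minimal unsupervised sketch using scikit-learn's IsolationForest (all feature names and numbers are invented for illustration):

```python
# Minimal sketch: unsupervised anomaly detection on aggregate telemetry
# counters with an isolation forest. The features and scales are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Pretend each row is a 5-minute window: [packets/s, errors/s, p99 latency ms]
normal = rng.normal(loc=[10_000, 5, 40], scale=[500, 2, 5], size=(1000, 3))
faulty = rng.normal(loc=[10_000, 80, 200], scale=[500, 10, 30], size=(5, 3))
windows = np.vstack([normal, faulty])

model = IsolationForest(contamination=0.01, random_state=0).fit(windows)
flags = model.predict(windows)  # -1 = anomaly, 1 = normal
print("flagged windows:", np.where(flags == -1)[0])
```

The appeal over a hand-tuned statistical threshold is that nothing here assumes a distribution per counter; the forest isolates whatever looks jointly unusual.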
I usually assume the detractors are working backwards from a caricature of "AI tools", one that often amounts to nothing more than a canard against the folks who are actually using AI tooling successfully in their work.
We never get any proof or source for that, and when we do (like the Devin thing) it's always a ridiculous JS project with a hundred lines of code that I could write in one day.
Give me some refactoring in a C++ code base with 100k lines of code, and we’ll be able to talk.
Anything involving tools you're not an expert with. If you already know how to do things and only use one specific language or framework, there's nothing to use AI for.
This whole area is so drenched in bullshit, it's no wonder that the generation of BS and fluff is still the most productive use. Just nothing where reliable facts matter. I do believe that machines can vomit crap 10x as fast as humans.
I had to sign a 140-page contract of foreign-language legalese. Mostly boilerplate, but I had specific questions about it.
Asking the AI questions and getting pointed to the specific page answering them meant I could do the job in 2 hours. Without an AI, it would have taken me 2 days.
For programming, it's very good at creating boilerplate, tests, docs, generic API endpoints, script argument parsing, script one-liners, etc. Basically anything where I, as a human, don't add much value.
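To illustrate the "generic API endpoint" case, this is the sort of boilerplate an LLM will happily stamp out (a minimal FastAPI sketch; the Item model and routes are invented for illustration):

```python
# Minimal sketch of generic CRUD boilerplate: an in-memory item store
# exposed over two FastAPI endpoints. Nothing here is domain-specific.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

class Item(BaseModel):
    id: int
    name: str

app = FastAPI()
items: dict[int, Item] = {}

@app.post("/items")
def create_item(item: Item) -> Item:
    if item.id in items:
        raise HTTPException(status_code=409, detail="Item already exists")
    items[item.id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    return items[item_id]
```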
It's much faster to generate imperfect things with AI and fix them than to write them myself when there's a lot of volume.
It's also pretty good at fixing typos, translating, giving word definitions, and so on. Meaning if you're already in the chat, there's no need to switch to a dedicated tool.
I don't personally get 10x on average (although on specific well-suited tasks I can), but I can get a good 3x on a regular basis.
But what you're doing isn't a real job. Who hands someone who doesn't speak the language a contract to sign? Don't you have a legal department that does this job for you, with people who are specialists in that?
Also, what are you going to do if the AI answered inaccurately and you signed a contract that says something different from what you thought?
I am actually pretty sure the thing described literally isn't a real job, at least not when working for a serious employer. I can't imagine a company telling someone to sign contracts in a language they can't speak and somehow make sense of them.
Either it's their own company and they're doing something unwise, they're doing it without the knowledge of their superior, or their company shouldn't be trusted with anything.
The point was that "AI helps me translate the contracts I want to sign" isn't a good example of "AI increases my productivity", because that's not something you should ever do.
But some things you shouldn't do at all if you can't do them properly, neither quickly nor slowly. As a layman, you can't sign a contract in a language you don't speak, even if you have a whole year, unless you can become more-than-fluent in that language within that year. It's just not something you should do, and the AI isn't reliable enough to help you with it. That's what a legal department is for.
I would never in my whole life sign anything in a foreign language that I don’t understand. It’s the perfect example of what AI is: let’s do anything that looks like a job well done and fuck it. That is not convincing. It’s suicidal.
I don't know that I'd say _useless_ but there's _a long way_ to go. I also suspect that many developers are going to wind up being dependent upon them at the expense of getting familiar with reading code/documentation, exploring with debuggers, writing forum/blog posts, etc.
Here is one example from the last time I asked Claude a question about message filtering on AWS SQS, which is a very common exchange in my (relatively limited) experience with these tools.
# me responding to a suggestion to use a feature which I was pretty sure did not exist
> ... and you're sure this strategy works with SQS FIFO queues?
# Claude apologizing for hallucinating
> I apologize - I need to correct my previous responses. I made a mistake - message filtering with message attributes is NOT supported with FIFO queues. This is an important limitation of FIFO queues.
If I didn't already have familiarity with this service and its feature set, I would have wasted 15/30/60 mins trying to validate the suggestion.
I can't imagine trying to debug code that was automatically generated and inserted into my toolchain using Copilot or whatever else. Again, I'm sure this will all get better but I'm not convinced that any of these tools should be the first ones we reach for.
> Nearly two years on from this article's publication, 'AI' tools are still as useless as ever for productive work.
Fully disagree. When developing I use LLMs all the time. It's far quicker for me to ask an LLM to build a React component than to type it myself and have to remember all the little nuances of a technology I use a couple of times a year.
When prototyping things or doing quick one-offs, using an LLM makes me 10x more productive.
OK, but React development is one of the simplest development tasks out there. It's probably the most popular UI framework, with the most data for the LLM to generate from existing solutions. Low-hanging fruit that also provides the least value to companies, since React developers are a dime a dozen.
Also, one could argue the LLM provides less value here, since a simple Google search for most React questions produces a very viable answer: 90% of what most React developers do has already been done and shared on the Internet.
ChatGPT is really good at small throwaway scripts to accomplish some task that would otherwise have taken a good portion of the day to write (it almost always gets the code right on the first try).
It can add usability features pretty well too. "Look at the variables in the script and add a feature to accept them as command line arguments and environment variables. Make the CONTENT variable mandatory."
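For flavor, here's roughly the kind of script that prompt tends to produce (a minimal sketch; the CONTENT variable comes from the prompt above, the OUTPUT_DIR flag and everything else are invented):

```python
# Sketch: expose script variables as CLI flags with environment-variable
# fallbacks. CONTENT is mandatory; OUTPUT_DIR has a default.
import argparse
import os

def parse_args():
    parser = argparse.ArgumentParser(description="Hypothetical script.")
    parser.add_argument(
        "--content",
        default=os.environ.get("CONTENT"),
        help="Payload to process (or set the CONTENT env var).",
    )
    parser.add_argument(
        "--output-dir",
        default=os.environ.get("OUTPUT_DIR", "./out"),
        help="Where results go (or set the OUTPUT_DIR env var).",
    )
    args = parser.parse_args()
    if args.content is None:
        parser.error("--content is required (or set the CONTENT env var)")
    return args

if __name__ == "__main__":
    args = parse_args()
    print(f"content={args.content!r} output_dir={args.output_dir!r}")
```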
They are also pretty good at software translation nowadays. You can give them a YAML file and they know what's a variable or placeholder and what's text to be translated.
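For example, given a hypothetical locale file like this (not from any real project), the model should translate the strings but leave the `%{count}` placeholder untouched:

```yaml
# Hypothetical en.yml handed to the model for German translation:
cart:
  empty: "Your cart is empty."
  items: "You have %{count} items in your cart."
# A good model returns the text translated with the placeholder intact:
#   items: "Du hast %{count} Artikel in deinem Warenkorb."
```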
They can be incredibly useful for many kinds of knowledge work. I've been much more productive using LLMs than before. I'm at the point where I don't know how I ever managed without.
The difference is that before LLMs you had to use search to find answers to unknowns. The productivity boost of knowing everything immediately doesn't fundamentally change the work itself. I understand over-reliance might become problematic at some point. My solution to that would be a bunch of GPUs so I have an R1 locally, anytime. The future is here; some people just haven't noticed.
I find this take interesting in the present. I work at $BIGCO, which has done lots of ML in house for many years. I have noticed more and more ML projects being killed because their work is superseded by external model vendors. Why spend millions on a team to build a new model when you can pay thousands to rent someone else's?
It's a good question, and there are sometimes good answers. Does someone else's model understand how to standardize and interpret your company's data? Will they notice and tell you if something isn't looking right? What if the model needs to change to account for opportunities or risks from evolving data sources or business environment? Building in-house can help ensure that an organization has the competencies to do these things without depending on someone else.
I hope this removes some people from the general dev pool who jumped because someone told them to "learn to code" and they thought it was a good way to get rich.
> All that being said, AI is not for every software engineer. I’ve seen about a 50-50 success rate of engineers entering this field. The most important determiner is a specific flavor of technical humility.
Oh, give me an f-ing break and spare me the self-congratulation.
The TFA's AI-generated image "Harnessing the most compute in the known universe" that captures the entire globe in a blue hue of AI compute is a bit disturbing, TBH.