> Also, not using LLMs might be a red flag. I wouldn't want a dev that is not open to this technology...
The person you're replying to is a senior, not junior candidate.
For junior devs who are still learning, LLMs are a great force multiplier that help them understand code faster and integrate new things.
For senior devs, LLMs are a maybe-optional tool that might save a couple hours per week, on a good week. I would consider extremely heavy LLM use a much larger red flag for a senior level position, than not using them at all.
I kind of feel like it is the inverse in many ways.
As an experienced engineer, I know how to describe what I want, which is 90% of getting the right implementation.
Secondly, because I know what I want and how it should work, I tend to know it when I see it. Often it only takes a nudge to get to a solution similar to what I already would have done. Usually it is just a quick comment like: "Do it in a functional style." or "This needs to have double check locking around {something}."
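To make the "double check locking" nudge concrete, here's a minimal sketch of the pattern in Python for lazy initialization of a shared resource. The names (`get_resource`, the dict payload) are illustrative, not from any particular codebase:

```python
import threading

_resource = None
_lock = threading.Lock()

def get_resource():
    """Lazily initialize a shared resource with double-checked locking.

    The first (unlocked) check makes the common case fast once the
    resource exists; the second (locked) check prevents two threads
    from both initializing it.
    """
    global _resource
    if _resource is None:          # first check, no lock taken
        with _lock:
            if _resource is None:  # second check, under the lock
                _resource = {"initialized": True}
    return _resource
```

Every caller gets the same object, and initialization runs at most once even under concurrent access.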
When I am working in the edge of my knowledge I can also lean on the model, but I know when I need to validate approaches that I am not sure satisfy my constraints.
A junior engineer doesn't know what they need most of the time and they usually don't understand which are the important constraints to communicate to the model.
I use an LLM to generate probably 50-60% of my code? Certainly it isn't ALWAYS strictly faster, but sometimes it is way, way faster. One of the other advantages is that it requires less detailed thinking at the inception phase, which allows me to fire off something to build a class or make a change when I'm in a context where I can't devote 100% of my attention to it, then review all the code later, still saving a bunch of time.
Worse/less experienced developers see a much greater increase in output, and better and more experienced developers see much less improvement. AI are great at generating junior level work en masse, but their output generally is not up to quality and functionality standards at a more senior level. This is both what I've personally observed and what my peers have said as well.
Interesting paper and lots of really well done bits. As a senior dev who uses LLMs extensively: this paper was mostly using Copilot in 2023. I used it and ChatGPT in that timeframe, and took ChatGPT's output 90% of the time; Copilot was rarely good beyond very basic boilerplate for me in that period. Which might explain why it helped junior devs so much in the study.
Somewhat related, I have a good idea of what I can and cannot ask ChatGPT for, i.e. when it will and won't help. That is partially usage-related and partially dev-experience-related. I usually ask it not to generate full examples, only minimal snippets, which helps quite a bit.
Another factor not brought into consideration here may be that there are two uses of "senior dev" in this conversation so far; one of them refers to a person who has been asked to work on something they're very familiar with (the same tech stack, a similar problem they've encountered etc.) whereas the other one has been asked to work on something unfamiliar.
For the second use case, I can easily see how effectively prompting a model can boost productivity. A few months ago, I had to work on implementing a Docker registry client and I had no idea where to begin, but prompting a model and then reviewing its code, and asking for corrections (such as missing pagination or parameters) allowed me to get said task done in an hour.
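As a flavor of the pagination detail mentioned above: the Docker Registry HTTP API v2 pages endpoints like `GET /v2/<name>/tags/list` via a `Link` header with `rel="next"`. Here's a small hedged sketch of a helper a client might use to follow it (the helper name is my own, not from any library):

```python
import re

def parse_next_link(link_header):
    """Extract the next-page URL from a registry Link header, or None.

    The Docker Registry v2 API returns headers shaped like:
        </v2/myimage/tags/list?last=v1.9&n=100>; rel="next"
    """
    if not link_header:
        return None
    m = re.match(r'\s*<([^>]+)>\s*;\s*rel="next"', link_header)
    return m.group(1) if m else None
```

A real client would loop: fetch the URL, append `resp.json()["tags"]`, then set the URL from `parse_next_link(resp.headers.get("Link"))` until it returns `None`.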
So I often use GitHub Copilot at work, usually with o1-preview as the LLM. This generally isn't the "autocomplete", which uses a lower-end model; I almost exclusively use the inline chat. That being said, I do also use the autocomplete a lot when editing. I might write a comment describing what I want to do and have it auto-complete; that is usually pretty accurate, and it fits how I work, since I've always liked the Code Complete approach of writing the comment first, then implementing the method.
For example, I needed to create a starting point for 4 langchain tools that would use different prompts. They start out similar, but I'll be diverging them. I would do something like copy the file of one, select all, then use the inline chat to ask o1 to rename the file, rip out some stuff, and make sure the naming was internally consistent. Then I might attach an additional output schema file, and then maybe something else I want it to integrate with, and tell it to go to town. About 90% of the work is done right; then I just have to touch up. (This specific use case is not typical, but it is an example where it saved me time: I had them scaffolded out and functional while listening to a keynote and in between meetings, then later that day I validated it. There were a handful of misses that I needed to clean up.)
As someone learning programming with an llm, its 50-50 as to whether it saves or costs me extra time.
This is mostly because, if I don't know that I'm asking for the wrong thing, the LLM won't correct me; it will provide code that answers the wrong question, making things up to do that if needed.
Sure, I learn by debugging the LLM's nonsensical code too, and it solves my "don't want to watch a 2-hour tutorial, because if I just watch the 10 minutes that explain what I want to learn, I don't understand any of the context" problem. But it's not much faster with the LLM, since I need to google things anyway to check whether it is gaslighting me.
It does help with understanding errors I'm unfamiliar with, and the most value I've found is pasting in my own code and asking it to explain what the code should do, so I find errors in my logic when it compiles but doesn't have the desired effect. It will also mention concepts I'm lacking so I can look them up (it won't explain them clearly, but at least it's flagging them to me) in a way YouTubers rarely do.
Still haven't made up my mind whether it is a net positive, as it often gets on my nerves to wait 10 minutes of fluff intro before it gets to the answer. Better than a 20-minute fluff video intro on YouTube, maybe?
Github copilot still sucks for writing complex code (algorithms or database queries, e.g.). Or trying to do unpopular things (like custom electronics using particular micros and driver chips).
For unit tests, it's a godsend. Particularly if you write one unit test, and then it can write another in the style you wrote.
LLMs can’t write unit tests.
They can’t even tell what you intend. If your code is already correct, you don’t need the unit test; if it’s not, the LLM can’t write the unit test. If you think an LLM can write tests for you, you can be replaced by an LLM.
Worse is when a protocol or shared state condition is modified.
E.g. suddenly some fresh out of college know-it-all sent crap into your function that you weren't expecting. Then he went to management to blame you for writing such shitty code.
Thing is, you wrote unit tests around that code, and the shitty know-it-all deleted them rather than changing them when he modified the code.
What? Is that a real example? Are you seriously working with people who delete your tests, misuse your code then complain about you to management?
Is your workplace filled with high school students? I’ve never seen anything so petty and immature in my professional career. I hope management told them to grow up.
IMO, the main use case for LLMs in unit tests is through a code completion model like Copilot where you use it to save on some typing.
Of course, there are overzealous managers and their brown-nosing underlings who will say that the LLM can do everything from writing the code itself to the unit tests, end-to-end, but that is usually because they see more value in toeing the line and following the narratives being pushed from the C-level.
You've got this completely backwards. A Jr with an LLM is a recipe for disaster. They don't know the tech, and have no clue what the LLM is spitting back. They copy code into the abyss.
Meanwhile, a sr with an LLM is a straight up superpower!
I've been in the industry for something like 15 years. I've been using LLMs to help me create the stuff I always wanted but never had time to make myself. This is how LLMs can be used by seniors to great effect - not just to cut time off tasks.
Same here (not in the industry though). I recently got a personal project done with the help of LLMs that I otherwise wouldn't have had the time or energy to research properly.
I’ve done so many tiny hobby projects lately that scratch 10+ year itches, where I’ve said so many times “I wish there was an application for this, but I’m too lazy to sit down and learn some Python library and actually do it.” Little utilities that might have taken me a day to bring up a bunch of boilerplate, study a few docs, write the code switching back and forth from the docs, and then debugging. Today that utility takes me 30 minutes tops to write just using Copilot and it’s done.
> For senior devs, LLMs are a maybe-optional tool that might save a couple hours per week, on a good week.
I'm an industrial engineer who writes software and admittedly not a "senior dev", I guess, but LLMs save me much more than just a few hours a week when cranking out a bunch of Qt/Python code that would cause my eyes to glaze over if I had to plod through it.
The flag you want to see from a senior is reasoned examples of how they use it effectively. Ask for stories about successes and failures. By now, everyone has some.
It is the opposite. Juniors can only solve toy tasks with chatgpt.
Someone with experience can first think through the problem. Maybe use ChatGPT for some research and to refresh your memory first.
Then you can break up the problem and let ChatGPT implement the stuff instead of typing everything. Since you are smart and experienced, you know what chunks of code it can write (basically nothing new, only stuff you could have copy-pasted before if you somehow had access to all the code on the internet yourself).
TLDR: It is way faster to use it. Especially for experienced programmers. Everything else is just ignorant.