When using AI, you are still the one responsible for the code. If the AI writes code and you don't read every line, why did it make its way into a commit? If you don't understand every line it wrote, what are you doing? If you don't actually love every line it wrote, why didn't you make it rewrite it with some guidance or rewrite it yourself?
The situation described in the article is similar to having junior developers we don't trust committing code and us releasing it to production and blaming the failure on them.
If a junior on the team does something dumb and causes a big failure, I wonder where the senior engineers and managers were during that situation. We closely supervise and direct the work of those people until they've built the skills and ways of thinking needed to be ready for that kind of autonomy. There is a reason we have multiple developers of varying levels of seniority: trust.
We build relationships with people, and that is why we extend them trust. We don't extend trust to people until they have demonstrated over a period of time that they are worthy of it. At the heart of relationships is that we talk to and listen to each other, grow and learn about each other, accept coaching, and get onto the same page with each other. Although there are ways to coach and fine-tune LLMs, they don't do nearly as good a job at this kind of growth and trust building as humans do. LLMs are super useful and absolutely should be worked into the engineering workflow, but they don't deserve the kind of trust that some people erroneously give them.
You still have to care deeply about your software. If this story were about inexperienced junior engineers messing up codebases, I'd be wondering where the senior engineers and leadership were while that happened. A huge part of engineering is, and always has been, building reliable systems out of unreliable components. To me this story points to gaps in process and in how people think more than it points to the weak points of AI.
The pace differs, though. A junior would need a week for a feature an LLM can produce in an hour, and you're expected to validate it just as quickly. And LLMs are trained to appeal to the reader, unlike the average junior dev. Devs will only get lazier the more they rely on LLMs. It's like being at university with no homework anymore, just lectures: you're passively ingesting material, not getting trained on real problems, because the AI does that for you. So you're no longer challenged to grow in your ___domain. What's left are the hard problems that the AI will mislead you on because it's unfamiliar with them, and your opportunity to learn was lost to delegating to AI. In the end the pressure at work will grow, more features will be expected in shorter time frames, and you'll get even less time to learn and grow as a developer or engineer.
I hear you. But consider this: substitute “LLM” in what you said above with “coworker” or “direct report”.
Does having a coworker automatically make a person dumb and no longer willing or able to grow? Does an engineer who becomes a manager instantly lose their ability to work or grow or learn? Sometimes, yes I know, but it’s not a foregone conclusion.
Agents are a new tool in our arsenal and we get to choose how we use them and what it will do for us, and what it will do to us, each as individuals.
You can substitute anything like “banana” or “parrot” for that and ask the same question.
A change of roles is a twist I didn't suggest; it's not related to my argument. I was talking about an engineering role, and I'm not seeing an analogy with what you're suggesting. Even less so does your suggested "immediately" resonate with me. Such transitions are rarely immediate. Growth on an alternative career path is a different story.
The problem I see here is that we're not given the choice you're describing. Take the recent Shopify pivot: it is now expected by management because they believe the exaggerated hype, especially amid the ongoing financing crunch in many places. So it's not a lawnmower we're talking about here but an oracle one would need to be capable of challenging.