> The first is that the LLM outputs are not consistently good or bad - the LLM can put out 9 good MRs before the 10th one has some critical bug or architecture mistake. This means you need to be hypervigilant of everything the LLM produces, and you need to review everything with the kind of care with which you review intern contributions.
This is not a counter-argument, but the same is true of any software engineer. For really good engineers the rate might be 1/100 or 1/1000 instead of 1/10, but critical mistakes are inevitable.