Jim and Toby are interviewing Darryl Philbin for the position of manager at Dunder Mifflin Scranton. Jim asks what Darryl would do to resolve a conflict between two employees in the warehouse Darryl already managed. "I'll answer that, Jim. I would use it as an opportunity to teach... about actions... and consequences of actions."

That's the answer you just gave me. Good note! (Darryl didn't get the job.)

You're dodging my point. If you are managing a team where people are using LLMs to generate pull requests full of "crap" code (your word), you have a mismanaged team, and you would have one with or without the LLMs, because on a well-managed team people don't create PRs full of crap code.

I'm fine if you want to say LLMs are dangerous tools in the hands of unseasoned developers. You can have a rule where only trusted developers get to use them; that actually seems pretty sane!

But a trustworthy developer using an LLM isn't going to be pushed by the LLM into creating "crap" PRs, because the LLM doesn't make the PRs, the developer does. If the developer isn't reading the code the LLM is producing, they're not doing their job.

Sometimes you get people saying "ok but reading that code is work so how is the LLM saving any time", which is something you could also say about adding any human developer to a team; their code also has to get reviewed.

So help me understand how your concerns here cohere.

You seem to be arguing against a point I never made: I never claimed LLMs are "dangerous", nor that crappy code only comes out of LLMs. They are just a tool, and I agree it is the responsibility of the developer wielding them to produce good work with (or without) them; this was never disputed.

I don't have any issue with someone using an LLM, but I have not observed any efficiency gain from those who do. That's my entire point, since efficiency is the biggest selling point for coding assistants. I've either seen them produce "crappy code" faster (which, ultimately, they could do by hand as well) or be slower than doing their work "manually".

At the same time, I disagree that a team producing lousy PRs is mismanaged by definition: there are circumstances where that is warranted (LLM or no LLM), as long as the long-term direction is improvement (less crappy code over time). There are plenty of nuances there too.