
I think it depends on the stakes of what you're building.

A lot of the concerns you describe make me think you work in a larger company or team and so both the organizational stakes (maintenance, future changes, tech debt, other people taking it over) and the functional stakes (bug free, performant, secure, etc) are high?

If the person you're responding to is cranking out a personal SaaS project or something they won't ever want to maintain much, then they can do different math on risks.

It probably also depends on the language you're using, and on the actual code itself.

Porting a multi-thousand line web SaaS product in Typescript that's just CRUD operations and cranking out web views? Sure why not.

Porting a multi-thousand line game codebase that's performance-critical and written in C++? Probably not.

That said, I am super fascinated by the approach of "let the LLM write the code and coach it when it gets it wrong," and I feel like I want to try that... but probably not on a work project, and maybe just on a personal one.




> Porting a multi-thousand line web SaaS product in Typescript that's just CRUD operations and cranking out web views? Sure why not.
>
> Porting a multi-thousand line game codebase that's performance-critical and written in C++? Probably not.

From my own experience:

I really enjoy CoPilot to support me writing a terraform provider. I think this works well because we have hundreds of existing terraform providers with the same boilerplate and the same REST-handling already. Here, the LLM can crank out oodles and oodles of identical boilerplate that's easy to review and deal with. Huge productivity boost. Maybe we should have better frameworks and languages for this, but alas...

I've also tried using CoPilot on a personal Godot project. I turned it off after a day because it was so distracting with nonsense. Thinking about it along these lines, I would not be surprised if this occurred because the high-level code of games (think what AAA games do in Lua, and what Godot does in GDScript) tends to be small in volume and rather erratic, with no real pattern to follow.

This could also be a cause for the huge difference in LLM productivity boosts people report. If you need Spring Boot code to put query params into an ORM and turn that into JSON, it can probably do that. If you need embedded C code for an obscure micro controller.. yeah, good luck.
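To make the "query params into an ORM and turn that into JSON" case concrete, here is a minimal sketch in plain TypeScript (no framework; the types and function names are hypothetical, just to illustrate the kind of mechanical glue code LLMs tend to handle well):

```typescript
// Hypothetical example: the CRUD glue an LLM can usually churn out reliably.
// Raw query params (all strings) -> typed filter -> query -> JSON.

interface User {
  id: number;
  name: string;
  active: boolean;
}

interface UserFilter {
  name?: string;
  active?: boolean;
}

// Parse string query params into a typed filter object.
function parseFilter(params: Record<string, string>): UserFilter {
  const filter: UserFilter = {};
  if (params.name !== undefined) filter.name = params.name;
  if (params.active !== undefined) filter.active = params.active === "true";
  return filter;
}

// Stand-in for what the ORM query would do against a real database.
function queryUsers(users: User[], filter: UserFilter): User[] {
  return users.filter(
    (u) =>
      (filter.name === undefined || u.name === filter.name) &&
      (filter.active === undefined || u.active === filter.active)
  );
}

const users: User[] = [
  { id: 1, name: "ada", active: true },
  { id: 2, name: "bob", active: false },
];

const result = queryUsers(users, parseFilter({ active: "true" }));
console.log(JSON.stringify(result)); // [{"id":1,"name":"ada","active":true}]
```

There is nothing clever here, which is exactly the point: it's high-volume, well-trodden pattern code, the opposite of the obscure-microcontroller case.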


> If you need embedded C code for an obscure micro controller.. yeah, good luck.

... or even for information in the embedded world. LLMs need to generate something, so they'll generate code even when the answer is "no dude, your chip doesn't support that".


> they'll generate code even when the answer is "no dude, your chip doesn't support that".

This is precisely the problem. As I point out elsewhere[0], reviewing AI-generated code is _not_ the same thing as reviewing code written by someone else, because you can ask a human author what they were thinking and get a moderately-honest response; whereas an AI will confidently and convincingly lie to you.

[0] https://news.ycombinator.com/item?id=41991750


I am quite interested in how LLMs would handle game development. Coming to it from a long career in boutique applications and also enterprise software, I find game development is a whole different level of "boutique".

I think both because of the coupled, convoluted complexity of much game logic, and because there are fewer open source examples of novel game code available to train on, they may struggle to be as useful.


It is a good example of how we are underestimating the human in the loop.

I know nothing about making a game. I am sure LLMs could help me try to make one, but surely they would help someone who has tried to make a game before even more. On the other hand, the expert game developer is probably not helped as much by the LLM as the person in the middle.

Scale that to basically all subjects. Then we get different opinions on the value of LLMs.


Yeah, I think the lack of game code available to train on could be a problem. There are a fair number of "black art" type problems in games too that an LLM may struggle with, just because there's not a lot to go on.

Additionally, there are the problems of custom engines and game-specific patterns.

That being said, there are parts of games with boilerplate code like any application. As I was finishing up a past game, some of this AI stuff was first becoming usable, and I experimented with generating some boilerplate classes from high-level descriptions of what I wanted. It did a pretty decent job.

I think some of the most significant productivity gains for games are going to be less on the code side and more in the technical art space.


> A lot of the concerns you describe make me think you work in a larger company or team and so both the organizational stakes (maintenance, future changes, tech debt, other people taking it over) and the functional stakes (bug free, performant, secure, etc) are high?

The most financially rewarding project I worked on started out as an early stage startup with small ambitions. It ended up growing and succeeding far beyond expectations.

It was a small codebase but the stakes were still very high. We were all pretty experienced going into it so we each had preferences for which footguns to avoid. For example we shied away from ORMs because they're the kind of dependency that could get you stuck in mud. Pick a "bad" ORM, spend months piling code on top of it, and then find out that you're spending more time fighting it than being productive. But now you don't have the time to untangle yourself from that dependency. Worst of all, at least in our experience, it's impossible to really predict how likely you are to get "stuck" this way with a large dependency. So the judgement call was to avoid major dependencies like this unless we absolutely had to.
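One common way to hedge that ORM bet (a sketch in TypeScript; the interface and names are made up for illustration, not our actual code) is to keep the dependency behind a small repository interface you own, so swapping it later touches one implementation rather than the whole codebase:

```typescript
// Hypothetical sketch: isolating a data-access dependency behind an
// interface you own, so a "bad ORM" can be replaced without untangling
// everything built on top of it.

interface Account {
  id: number;
  email: string;
}

// The only surface the rest of the app is allowed to see.
interface AccountRepo {
  findById(id: number): Account | undefined;
  save(account: Account): void;
}

// Today's implementation: in-memory here, but it could be hand-written
// SQL or an ORM -- the application code doesn't know or care which.
class InMemoryAccountRepo implements AccountRepo {
  private rows = new Map<number, Account>();
  findById(id: number): Account | undefined {
    return this.rows.get(id);
  }
  save(account: Account): void {
    this.rows.set(account.id, account);
  }
}

// Application code depends on AccountRepo, never on the implementation.
function changeEmail(repo: AccountRepo, id: number, email: string): boolean {
  const acct = repo.findById(id);
  if (!acct) return false;
  repo.save({ ...acct, email });
  return true;
}

const repo = new InMemoryAccountRepo();
repo.save({ id: 1, email: "old@example.com" });
changeEmail(repo, 1, "new@example.com");
console.log(repo.findById(1)?.email); // new@example.com
```

The cost is a little indirection up front; the payoff is that "fighting the ORM" stays a problem confined to one file instead of a months-long untangling.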

I attribute the success of our project to literally thousands of minor and major decisions like that one.

To me almost all software is high stakes, unless it's so trivial that nothing about it matters at all; but that's not what these AI tools are marketed toward, is it?

Something might start out as a small useful library and grow into a dependency that hundreds of thousands of people use.

So that's why it terrifies me. I'm terrified of one day joining a team or wanting to contribute to an OSS project - only to be faced with thousands of lines of nonsensical autogenerated LLM code. If nothing else it takes all the joy out of programming computers (although I think there's a more existential risk here). If it was a team I'd probably just quit on the spot but I have that luxury and probably would have caught it during due diligence. If it's an OSS project I'd nope out and not contribute.


Adding here due to some resonance with this point of view: this exchange lacks a crucial axis. What kind of programming?

I assume the parent post is saying "I ported thousands of lines of <some C family executing on a server> to <python on standard cloud environments>". I could be very wrong, but that is my guess. Like any data-driven software machinery, there is massive inherent bias and extra resources for <current in-demand thing>; in this guessed case, that is Python running on a standard cloud environment, with the loaders and credentials parts too, perhaps.

Those who learned programming in the theoretic ways know that many, many software systems are possible in various compute contexts. And those working on hardware teams know that there are a lot of kinds of computing hardware. To add another off-the-cuff idea: there is so much web interface code, à la 2004, to bring to newer, cleaner setups.

I am not <emotional state descriptor> about this sea change in code generation; code generation is not at all new, actually. It is the blatant stealing and LICENSE washing of a generation of OSS that gets me. Those code generation machines are repeating their inputs. No authors agreed, and no one asked them, either.



