For me it works pretty well for Python scripts that pull data from somewhere and do something with it. Even the code quality is often quite decent. But for larger, more complex projects it doesn't work for me: it produces a lot of unmaintainable code.
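
To give a concrete idea of the kind of script I mean, here is a rough sketch; the endpoint and the JSON field names ("customer", "amount") are made up for illustration:

    # Pull JSON from an HTTP endpoint and aggregate it.
    # The URL and field names below are hypothetical.
    import json
    import urllib.request

    URL = "https://example.com/api/orders"  # invented endpoint

    def fetch_orders(url):
        # Fetch the response and decode the JSON payload.
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def total_by_customer(orders):
        # Sum order amounts per customer.
        totals = {}
        for order in orders:
            customer = order["customer"]
            totals[customer] = totals.get(customer, 0.0) + order["amount"]
        return totals

    if __name__ == "__main__":
        for customer, total in sorted(total_by_customer(fetch_orders(URL)).items()):
            print(f"{customer}: {total:.2f}")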

But I can easily see a not-so-distant future where you don't even have to look at the code anymore and just let the AI do its thing, similar to how we don't check the assembly instructions of compiled code.

You don't have to check the assembly instructions of compiled code because armies of compiler engineers sweat the details, making damn sure that every single step of compilation is done by algorithms that are correct. And even then, compilers fail often enough.

I'm not saying that what you claim is entirely impossible, but it would require some major inventions and a very different approach to how ML is implemented and used compared to what's happening today. And I'm not convinced that the economics for that are there.

An LLM is not a compiler, where extreme pains are taken to ensure correctness and consistency. LLM coding output will not always need to be checked, depending on the use case, but it will definitely not be consistently correct across all use cases.
