
> Most of webdev has been done 1000000x before

Not according to your very specific stakeholder demands, environment, naming conventions, data tables, and data-protection requirements; otherwise you would just use a library.

Those might seem like trivial differences, but plenty of things go wrong there: enough that you can't just use a library instead of a programmer, and enough that vibe coding will run into the same issues.




I think the point being made is that even though the specific set of requirements may be unique on a per-stakeholder basis, all the components already exist and have been combined in many ways. So it really boils down to prompting in such a way that the right set of components are brought together in the right way. That's where the skill now lies.


> So it really boils down to prompting in such a way that the right set of components are brought together in the right way. That's where the skill now lies.

And how is this different from just calling the libraries in the right way to make it adhere to stakeholder requirements?

The statement isn't "it's impossible to get an AI to print the code for the right program", but "the work and skills you need to get an AI to print the right program are as much as, or more than, doing it yourself." That seems to be true for all but trivial programs. Here trivial means you can download a git repo and change some variables to get that result.


> And how is this different from just calling the libraries in the right way to make it adhere to stakeholder requirements?

In that you need humans that can understand stakeholder requirements, constraints of the ___domain, and limits of existing software, so that they can write the necessary glue to make everything work.
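
To make "glue" concrete, here is a minimal Python sketch of the kind of stakeholder-specific code that wraps an off-the-shelf library. Every name and rule in it (mask_email, RETENTION_DAYS, export_users, the retention policy itself) is a made-up example, not anyone's real requirement:

    import csv
    import io
    from datetime import datetime, timedelta, timezone

    RETENTION_DAYS = 90  # hypothetical data-protection requirement

    def mask_email(email: str) -> str:
        # Hypothetical ___domain rule: exports may show the provider,
        # never the full user address.
        local, _, ___domain = email.partition("@")
        return f"{local[:1]}***@{___domain}"

    def export_users(users: list[dict]) -> str:
        # The csv library does the generic part; the conditions
        # wrapped around it are the part no library ships with.
        cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=["name", "email"])
        writer.writeheader()
        for user in users:
            if user["last_seen"] < cutoff:
                continue  # retention policy: skip stale records
            writer.writerow({"name": user["name"],
                             "email": mask_email(user["email"])})
        return buf.getvalue()

The CSV writing is the 1000000x-done part; extracting those two rules from the stakeholder and encoding them correctly is the part being argued about.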

Thing is, LLMs know more about every ___domain than any non-expert (and for most software, "___domain experts" are just non-___domain-expert programmers who self-learn enough of it to make the project work), and they can understand what stakeholders say better than other humans can, at least superficially. I expect it won't take long until LLMs can do all of this better than an average professional.

(Yes, it may take a long time before an LLM can replace a senior Googler. But that's not the point. It's enough for an LLM to replace an average code monkey churning out mobile apps for brands, to have a huge chunk of the industry disappear overnight.)


Paul Graham argued that if you act like the average startup, you'll get the same results as the average startup. And the average startup fails.

It follows that if you want to have success, you need to do something new which hasn't been done before.

> LLMs know more about every ___domain than any non-expert

As soon as you're creating something new, or working in a niche field, LLMs struggle.

So do junior developers. But they learn and get better with time. While onboarding a junior developer requires more effort than doing the work yourself, it's worth it in the long run.

IMHO, that's the largest issue LLMs have today. They can't really adapt and learn "in the field". We build a lot of workarounds with memory to circumvent that, but that too only works until the memory exceeds the context window.
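
For illustration, a minimal Python sketch of such a memory workaround; all names here (memory_notes, build_prompt, the 4-characters-per-token heuristic) are hypothetical, not any agent framework's actual API:

    MAX_CONTEXT_TOKENS = 8_000  # whatever the model's window is

    memory_notes: list[str] = []  # "lessons learned" between sessions

    def estimate_tokens(text: str) -> int:
        # Crude heuristic: roughly 4 characters per token in English.
        return len(text) // 4

    def build_prompt(task: str) -> str:
        # Prepend the accumulated notes to every request...
        prompt = "Project notes:\n" + "\n".join(memory_notes) + f"\n\nTask: {task}"
        # ...but once the notes outgrow the window, old "lessons" get
        # silently dropped, which is exactly the limit described above.
        while memory_notes and estimate_tokens(prompt) > MAX_CONTEXT_TOKENS:
            memory_notes.pop(0)  # forget the oldest note
            prompt = "Project notes:\n" + "\n".join(memory_notes) + f"\n\nTask: {task}"
        return prompt

The workaround is append-only learning with forced forgetting, which is why it degrades instead of improving the way a junior developer does.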

I've tried using ChatGPT, Copilot, custom GPT-4o models, and Cursor. The task they did best at was generating a simple landing page (though they struggled with Tailwind 4; Cursor spent almost 8 hours debugging that issue).

With tasks that require more niche ___domain knowledge, it went much worse. Cursor finished some of the tasks I gave it, but it took over 10x the time a junior developer would've spent, and I had to constantly babysit it the entire time: providing context, prompting, writing Cursor rules, prompting again, etc. The others failed entirely.

If I start working on an unfamiliar task, I read all the docs, write some notes for myself, and maybe build some sample projects to test my understanding of the edge cases. Similarly, I build small prototypes before committing to a strategy for the actual task.

Maybe ML agents would fare better with that approach, instead of today's approach of just creating a mess in the codebase like an intern.


It's a switch of focus. Instead of being occupied by the grunt work of integrating libraries and code logic from the outset, focus can now remain on the larger picture for longer, with a need to jump into raw code only for the really tricky/unique problems, if there are any. That can be a lot of overhead avoided, if done well.



