Hacker News

It's tough to generalize about "AI code"; there's a huge difference between "please make a web frontend to this database that displays table X with some ability to filter it" and "please refactor this file so that instead of using plain strings, it uses the i18n API in this other file".
What is the difference between them? Both seem like quite trivial implementations?

Trivial doesn't mean the AI will get it right. A trivial request can be to put an elephant into a fridge. Simple concept, right?

Except the AI will probably destroy both the elephant and the fridge, and in the meantime order 20 more fridges of all sizes, and elephants, for testing (if you're on MCP), before ever asking whether you meant a cold storage facility, or whether it was a good idea in the first place.


Okay, but which one of the two is the elephant-destroying one?

Probably both, but the AI won't tell you until it's destroyed many elephants.

It won’t tell you at all, until you tell it. And then it’ll say “you’re absolutely right, doing this will destroy the fridge and the elephant”.

Except it won't, and there will be a lot of proverbial elephants in fridges at all levels of the project: in the design, in security, etc.

They’re inherently very different activities. Refactoring a file assumes you’ve made a ton of choices already and are just following a pattern (something LLMs are actually great at). Building a front-end from nothing requires a lot of thought, and rather than ask questions the LLM will just give you some naive version of what you asked for, disregarding all of those tough choices.

Yeah, these are both extremely basic, great use cases for LLM-assisted programming. There's no difference; I wonder what the OP thinks it is.

Disagree. There is almost no decision making in converting to use i18n APIs that already have example use cases elsewhere. Building a frontend involves many decisions, such as picking a language, build system, dependencies, etc. I’m sure the LLM would finish the task, but it could make many suboptimal decisions along the way. In my experience it also does make very different decisions from what I would have made.
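To make the contrast concrete, here is a minimal sketch of the kind of mechanical i18n refactor being described. The `t()` lookup function and the `MESSAGES` table are hypothetical stand-ins for "the i18n API in this other file", not any specific library:

```python
# Hypothetical minimal i18n API, standing in for the project's real one.
MESSAGES = {
    "en": {"greeting": "Hello, {name}!"},
    "de": {"greeting": "Hallo, {name}!"},
}

def t(key: str, locale: str = "en", **kwargs) -> str:
    """Look up a message by key and interpolate any placeholders."""
    return MESSAGES[locale][key].format(**kwargs)

# Before the refactor: a plain hard-coded string.
def greet_before(name: str) -> str:
    return f"Hello, {name}!"

# After the refactor: the same call site, routed through the i18n API.
# Every such change follows this identical pattern, which is why the
# task involves almost no decision making once one example exists.
def greet_after(name: str, locale: str = "en") -> str:
    return t("greeting", locale=locale, name=name)
```

Each string replacement is a local, pattern-following edit; building a frontend from scratch has no analogous existing pattern to copy.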

The AI will pick the most common technologies used for this purpose, which is both "good enough" and what people generally do at scale (for this exact reason).

This was the promise of no-code: "all apps are CRUD anyway", "just build for common use cases", etc. This didn't turn out to be true as often as predicted. If averaging people's previous decisions were truly fruitful, we'd have seen much stronger results from it before AI.

Building even a small web frontend involves a huge number of design decisions, and doing it well requires a detailed understanding of the user and their use cases, while internationalisation is a relatively mechanical task.

And that kind of decision making is what's important. More often than not, you ask someone to explain their decision making in building a feature, and what you get is "I don't know, really." The truth is that they have externalized their thinking.

Damn I didn’t see your comment and wrote basically the same thing. Great minds think alike I guess. Oh well..




