
There was a moment when Google introduced autocomplete, and it was a game changer.

LLMs are still waiting for their autocomplete moment: when they become an extension of the keyboard and complete our thoughts so fast that I could write this article in 2 minutes. That will feel magical.

The speed is currently missing




For me the speed is already there. LLMs can write boilerplate code at least an order of magnitude faster than I can.

Just today I generated U-Net code for a certain scenario. I had to tweak some parameters, but in the end I got it working in under an hour.
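
For a sense of what that boilerplate looks like, something along the lines of this stripped-down PyTorch sketch (not my actual scenario; the channel counts, depth, and input size are placeholder values):

    # Minimal placeholder U-Net: one downsampling stage, one upsampling stage,
    # and a skip connection. A real scenario would add more stages and normalization.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        def __init__(self, in_ch=1, out_ch=1):
            super().__init__()
            self.enc1 = conv_block(in_ch, 32)
            self.enc2 = conv_block(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec1 = conv_block(64, 32)   # 64 = 32 skip channels + 32 upsampled
            self.head = nn.Conv2d(32, out_ch, 1)

        def forward(self, x):
            e1 = self.enc1(x)                # full-resolution features
            e2 = self.enc2(self.pool(e1))    # downsampled features
            d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
            return self.head(d1)

    x = torch.randn(1, 1, 64, 64)            # one 64x64 single-channel image
    print(TinyUNet()(x).shape)               # torch.Size([1, 1, 64, 64])

None of that is interesting to type by hand; the time went into tweaking parameters for the actual task.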


> LLMs can write boilerplate code at least an order of magnitude faster than I can.

This is my biggest fear with everyone adopting LLMs without considering the consequences.

In the past, I used "Do I have to write a lot of boilerplate here?" as a sort of litmus test for figuring out when to refactor. If I spend the entire day just writing boilerplate, I'm 99% sure I'm doing the wrong thing, at least most of the time.
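
To make that concrete, a made-up toy example in Python (the handlers and fields are invented, purely to illustrate the smell):

    # Made-up example: the same validation boilerplate copy-pasted into every
    # handler. Having to type this all day is the signal that a refactor is overdue.
    def create_user(payload):
        if "name" not in payload:
            raise ValueError("missing field: name")
        if "email" not in payload:
            raise ValueError("missing field: email")
        return {"name": payload["name"], "email": payload["email"]}

    def create_order(payload):
        if "user_id" not in payload:
            raise ValueError("missing field: user_id")
        if "item" not in payload:
            raise ValueError("missing field: item")
        return {"user_id": payload["user_id"], "item": payload["item"]}

    # What the boredom pushes you toward once you notice the pattern:
    def require(payload, *fields):
        missing = [f for f in fields if f not in payload]
        if missing:
            raise ValueError("missing fields: " + ", ".join(missing))
        return {f: payload[f] for f in fields}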

But now, junior developers won't even develop the intuition that something is wrong if they're spending the entire day just typing boilerplate; instead they'll just get the LLM to do it, with no careful thought or reflection about the design and architecture.

Of course, reflection is still possible, but I'm afraid it won't be as natural and "in your face", which is what forces you to learn it; instead it'll only be a thing for people who consider it in the first place.


The question to ask yourself is what value one gets from that refactor. Is the software faster? Does it have more functionality? Is it cheaper to operate? Does it reduce time to market? These would be benefits to the user, and I'd venture to say the refactor does not impact the user.

The refactor will impact the developer. Maybe the code is now more maintainable, or easier to integrate, or easier to test. But this is where I expect LLMs will make a lot of progress: they will not need clean, well-structured code. So the refactor, in the long run, is not useful to a developer with an LLM sidekick.


I agree with you but as a thought exercise: does it matter if there is a lot of boilerplate if ultimately the code works and is performant enough?

Fwiw, most of the time I like writing code and I don't enjoy wading through LLM-generated code to see if it got it right. So the idea of using LLMs as reviewers resonates. I don't like writing tests though so I would happily have it write all of those.

But I do wonder if eventually it won't make sense to ever write code and it will turn into a pastime.


> I agree with you but as a thought exercise: does it matter if there is a lot of boilerplate if ultimately the code works and is performant enough

Yeah, it matters, because it is almost guaranteed that eventually a human will have to interact with the code directly, so it should still be good-quality code.

> But I do wonder if eventually it won't make sense to ever write code and it will turn into a pastime

Even the fictional super-AI of Star Trek wasn't so good that the engineers didn't have to deeply understand the underlying work that it produced.

Tons of Trek episodes deal with the question of "if the technology fails, how do the humans who rely on it adapt?"

In the fictional stories we see people who are absolute masters of their ___domain solve the problems and win the day.

In reality we have glorified chatbots, nowhere near the abilities of the fictional super-AI, and we already have people asking "do people even need to master their domains anymore?"

I dunno about you but I find it pretty discouraging


> I dunno about you but I find it pretty discouraging

same :)


Or: An important thing that was taught to me on the first day of my C++ class was ‘no project was ever late because the typing took too long’.


No, that is not the only thing missing. Right now AI continues to make mistakes a child learns not to make by age 10: don't make shit up. And by the time you're an adult, you've figured out how to manage not to forget (by whatever means necessary) the first thing your boss told you to do after he added a new ask.


That's not entirely true. AI can solve math puzzles better than 99.9% of the population.

Yes, AI makes mistakes; so do humans very often.


Humans make mistakes, sure, but if a human starts hallucinating we immediately lose trust in them.

> AI can solve math puzzles better than 99.9% of the population

So can a calculator.


> AI can solve math puzzles better than 99.9% of the population

I studied electronic engineering and then switched to software engineering as a career, and I can say the only times I've been exposed to math puzzles were in academic settings. The knowledge is nice and helps with certain kinds of problem solving, but you can be pretty sure I will reach for a textbook and a calculator before trying to brute-force one such puzzle.

The most important thing in my daily life is to understand the given task, do it correctly, and report on what I've done.


Yep, this exactly. If I ever feel that I'm solving a puzzle at work, I stop and ask for more information. Once my task is clear, I start working on it. I learned long ago that puzzle solving is almost always solving the wrong problem and wasting time.


Maybe that's where the disconnect comes from. For me, understanding comes before doing. And coding, for me, is doing. There have been cases where I assumed my understanding was complete and the feedback (errors during compiling and testing) told me I was wrong, but that just triggered another round of research to seek understanding.

Puzzle solving is only for when information is not available (reverse engineering, closed systems, ...), but there's a lot of information out there for the majority of tasks. I'm amazed when people spend hours trying to vibe-code something when they could spend just a few minutes reading about the system and come up with a working solution (or find something that already works).


> so do humans very often.

While I do see this argument made quite frequently, doesn't any professional effort center on procedures employed specifically to avoid mistakes? Isn't this really the point of professional work (including professional liabilities)?


> There was a moment when Google introduced autocomplete, and it was a game changer.

I remember... I turned it off immediately.


But why should I put my time into reading and thinking about your article if you didn't think it worth your time to actually think about and write it?

Hope the Next Big Thing (TM) is the Electric Monk.


> The speed is currently missing

I feel like the opposite, or something else, is missing. I can write lots of text quickly if I just write down my unfiltered stream of thoughts, both with and without an LLM, but what's the point?

What takes a really long time is making a large text contain just the important parts. Saying less takes longer, at least for me, but hopefully saves time and effort for the people reading it. At least that's the idea.

The quote "If I had more time, I would have written a shorter letter" comes to mind.


Unfiltered stream of thoughts: I wrote a lengthy note that is quite incoherent and disorganized, and I intend to use the LLM to organize it. Ugh.


But who will actually read the “autocompleted” text?

At that point, any other human being will likely also have an LLM to scan incoming text.


You can do this in Cursor already. Write a README or a SPEC file and the LLM will try to "complete your thoughts", except that it's off the mark often enough. I find this hugely distracting, as if someone keeps interrupting your train of thought with random stuff.


For me, it’s the opposite – speed is there, intelligence is still lacking sometimes.

I’m OK waiting 10 minutes with o1-pro, but I want a deep insight into the issue I’m brainstorming. Hopefully GPT-5 will deliver.


Great idea! And scary.



