I work at a popular Seattle tech company, and AI is being shoved down our throats by leadership, to the point that it was made known they're tracking how much devs use AI, and I've even been asked why I'm personally not using it more. I've long been a believer in using the right tool for the right job. Sometimes that's AI, but not super often.

I've spent a lot of time trying to think about how we arrived here. Where I work there are a lot of Senior Directors and SVPs who used to write code 10+ years ago, but who, if you asked them to build a little hack project, would have no idea where to start. AI has given them back something they'd lost, because with it they can build something simple super quickly. But they fail to see that just because it accelerates their hack project, that doesn't mean it will accelerate someone who's an expert. AI might help a hobbyist plant a garden, but it wouldn't help a farmer squeeze out more yield.




> just because it accelerates their hack project, it won't accelerate someone who's an expert.

I would say that this is the wrong distinction. I'm an expert who's still in the code every day, and AI still accelerates the hack projects I do in my spare time, but only to a point. Once I hit 10k lines of code, code generation with chat models becomes substantially less useful (though autocomplete, including Cursor-style advanced autocomplete, retains its value).

I think the distinction that matters is the type of project being worked on. Greenfield stuff—whether a hobby project or a business project—can see real benefits from AI. But eventually the process of working on the code becomes far more about understanding the complex interactions between the dozens to hundreds of components that are already written than it is about getting a fresh chunk of code onto the screen. And AI models—even embedded in fancy tools like Cursor—are still objectively terrible at understanding the kinds of complex interactions between systems and subsystems that professional developers deal with day in and day out.


My experience has gotten better by focusing on documenting the system (with AI to speed up writing the markdown). I find reasoning models quite good at understanding systems if you clearly tell them how they work. I think this creates a virtuous circle where I incrementally write much more documentation than I ever had the stomach for before. Of course this is still easier if you started greenfield, but it's allowed me to keep Claude 3.7 in the game even as the code base is now 20k+ lines.
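To make "documenting the system" concrete: what seems to help most is contract-level notes at the top of each module, not line-by-line comments. A minimal sketch in Rust (the `orders`/`inventory` modules and all names here are invented for illustration):

    //! orders: owns the order lifecycle.
    //!
    //! Contracts provided:
    //! - `place_order` is the only way to create an `Order`.
    //! - An `Order` never changes once it reaches `Status::Shipped`.
    //!
    //! Contracts relied on:
    //! - `inventory::reserve` must succeed before an order is created.

    mod inventory {
        // Stub so the sketch compiles; the real module lives elsewhere.
        pub fn reserve(_id: u64) -> Result<(), String> {
            Ok(())
        }
    }

    pub enum Status {
        Pending,
        Shipped,
    }

    pub struct Order {
        pub id: u64,
        pub status: Status,
    }

    /// Creates a pending order after reserving inventory.
    pub fn place_order(id: u64) -> Result<Order, String> {
        inventory::reserve(id)?;
        Ok(Order { id, status: Status::Pending })
    }

A model that loads a handful of headers like these gets the contracts between subsystems without reading all 20k lines.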


> even as the code base is now 20k+ lines.

That's better than my past experience with hobby projects, but also nowhere near as big as the kinds of software systems I'm talking about professionally. The smallest code base I have ever worked on was >1M lines, the one I'm maintaining now is >5M.

I don't doubt that you can scale the models beyond 10k lines with strategies like this, but I haven't had any luck so far at the professional scales I have to deal with.


I've found claude-code good in a multi-million line project because it can navigate the filesystem like a human would.

You have to give it the right context and direction, like you would a new junior dev, but then it can be very good. E.g.:

> Implement a new API in `example/apis/new_api.rs` to do XYZ which interfaces with the system at `foo/bar/baz.proto` and use the similar APIs in `example/apis/*` as reference. Once you're done, build it by running `build new_api` and fix any type errors.

Without that context (eg. the example APIs) it would flail, but so would most human engineers.


Well, I have also worked on systems of multiple millions of lines, well pre-LLM, and I sure as hell didn't actively understand every aspect of them. I deeply understood the area I worked on, the contracts with my dependencies, and the contracts I provide, plus the overall architecture. We'll see how it goes if my project grows to that point, but I believe that by clearly documenting those things and focusing on low coupling overall, I can keep the workflow I have now, just with context loading at the start of every session. Time will tell.

In general, though, it's been a lot of learning how to make LLMs work for me, and I do wonder if people dismiss them too quickly because they subconsciously don't want them to work. Also, "LLM" is too generic: Copilot with 4o sucks, but Claude in Cursor or Windsurf does not.


I’m using it to ship real projects with real customers in a real code base at 2x to 5x the rate I was last year.

It’s just like having a junior dev. Without enough guidance and guardrails, they spin their wheels and do dumb things.

Instead of focusing on lines of code, I'm now focusing on describing overall tasks, breaking them down, and guiding an LLM to a solution.
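To illustrate (the feature and paths here are made up), a task I hand off looks less like "write this function" and more like:

> We're adding CSV export to the reports page. Add an `export_csv` endpoint in `reports/api.rs` next to the existing JSON endpoint, using it as a reference. Reuse the existing query layer rather than writing new SQL. When you're done, run the test suite and fix any failures.

Most of my work is now in writing that paragraph well and reviewing what comes back.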


Cool anecdote. For me, it has slowed me down 8x to 23x since I started using it in real projects with real customers in a real code base last year.

So we're 1-1 in pointless personal anecdotes. Now show us the numbers! How did you measure this? Can you show a 2x/5x increase in projects/orders/profits/stock price?


I'm not really sure I understand your counterargument. Pretty much everything about personal productivity is anecdotal because it's so uniquely tied to the individual. I showed you my numbers: I am 2x to 5x faster at delivering projects.


The point is that leadership gets to write on their own promo document / resume about how they "boosted developer productivity" by leading the charge on introducing AI dev processes to the company. Then they'll be long gone onto the next job before anybody actually knows what the result of it was, whether it actually boosted productivity or not, whether there were negative side-effects, etc.


Aye - this is a limitation of the current tech. For any project greater than 1k lines where the model was not pretrained on the code base, AI is simply not useful beyond documentation search.

It's easy to see this effect in any new project you start with AI: the first few pieces of functionality are easy to implement, and boilerplate gets written effortlessly. Then the AI can't reason about the code and makes dumb mistakes.



