Hacker News

In my experience, worse work typically gets done a lot more slowly than good work.



I think it depends on the timescale. In the short term, a 'bad' hacky solution is usually quicker than a 'good' one, but it will likely slow you down over the medium to long term.

If the timing of that bad hacky solution being shipped is critical, then it's actually the perfect solution: the right work was done quickly, which is great. If it's not, and it's instead the first piece of a new feature intended to last for a while, then work on that feature as a whole may be slow as a result, and it looks more like bad, slow work.

I think with experience comes the ability to choose the appropriate approach for the problem at hand, whereas inexperience will usually lead to the short term quickest easiest route being chosen every time by default.

Another subtlety is that, typically, more experienced people will actually deliver the 'good' solution faster, which results in a double win if that is indeed the correct course of action.


I find writing a quick hacky solution and then fixing it over time (IF it's worth it; alternatively, a rewrite might happen) is quicker, because I don't usually write what I want/need on the first try.

If I want to end up with a great end result, I find this is the way I have to work.


These discussions tend to assume in-house development though.

I've spent far too much time around organisations where "more time with bums on seats" correlates directly with "billing more", which equals "better work" if you ask the right people. That leads to an unfortunately different scenario.


Defense contracting. It's all about billing hours against a contract. As long as I'm doing that, I'm making the company money.


And that's how you get code from company A performing sanity checks on parts of a file that are outside the scope of the spec the data in that file is supposed to meet, failing content that meets the spec but has had all superfluous information discarded by a custom compression algorithm designed by company B to improve performance on a network with insane latency.

That was a fun bug to track down.


This is my experience as well; I've never seen slow work result in good work.


I don't think they mean slow development. Rather, overall rate: all other things being equal, someone who insists on no more than a 40-hour week will accomplish less than someone willing to work 60-80, by whatever metric you use (KLOC, story points, etc.)


I've only ever heard one person ever mention KLOCs, and they were in their 60s nearly a decade ago. They were explaining how people used it as a measurement of work done in the '80s, and what a terrible idea it was to measure throughput that way.

You're the second person. When was the last time you encountered anyone using KLOCs (thousands of lines of code) as a measurement of work done? Are they working on mainframe codebases?


> You're the second person. When was the last time you encountered anyone using KLOCs (thousands of lines of code) as a measurement of work done? Are they working on mainframe codebases?

The DOD and their "experts" love measuring software projects this way. They even take the cost/LOC ratio as a measure of value. I was once semi-seriously chastised for committing a change with net negative LOC because it broke the formula and implied negative value.


Me too (also DoD contracting). Cleaned up a module full of amateurish over-complicated code resulting in a negative LOC count on the boss's spreadsheet. Oh, I heard about it.


I like to use LOCs to see at a glance the size of a project. More complex business logic usually takes more LOCs to write down. And I bet there is a strong correlation between LOCs and hours worked on a project, which would make LOCs a good approximation of work done. Actually, I should research it. It shouldn't be too hard to find all the tickets for projects and compare the time logged to LOCs.


1) There is no fixed correlation between LOCs and business logic complexity. 2) There is actually an inverse correlation between LOCs and the amount of work done: when I'm most productive, I actually decrease the LOCs.


1) You're telling me that if you take some fixed business logic and add some exceptions to it (make it more complex), you won't need more LOCs to address this added complexity? I don't believe you.

2) You can't be productive having 0 LOCs. So 0 LOCs = 0 productivity, and n > 0 LOCs = (presumably) some productivity. I would say we have a trend.


1) I said that there is no fixed correlation. 2) I'm most productive when my LOCs are negative: deleting duplicate code or finding a better approach to a problem.


1) So you say there is a (positive) correlation, but it is not "fixed". Sorry, I don't understand what that means.

2) I know what you mean. I do it often myself and I know it improves code quality. It's not my intention to measure "work done" in LOCs, which, as you say, could be contradictory. I propose to measure the size of a project in LOCs. Suppose you have two projects, and all the time you want to reduce their LOCs (to improve the code). After you are done, both projects will still have some LOCs. My point is that the project with more LOCs, after you've had your fun with it, is the "bigger"/more complicated project.
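To make "size in LOCs" concrete, here is a minimal sketch of a counter for non-blank lines. The `.py` suffix and the idea of comparing two project directories are assumptions for illustration:

```python
# Toy LOC counter: tally non-blank lines in source files under a directory.
# The ".py" suffix is an assumption; swap in whatever your projects use.
from pathlib import Path

def count_loc(root: str, suffix: str = ".py") -> int:
    total = 0
    for path in Path(root).rglob(f"*{suffix}"):
        with open(path, encoding="utf-8", errors="ignore") as f:
            # Count only lines with visible content, skipping blanks.
            total += sum(1 for line in f if line.strip())
    return total
```

Run it over both projects after the cleanup and compare the totals; whether the bigger total really means the "bigger" project is exactly what's in dispute here.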


1) Imagine one project written using CTRL+C, CTRL+V everywhere; another, much bigger project written with good architecture, always following the DRY principle; and another, bigger still, written in a much more compact language like F# or Haskell. By your criteria the first project is the biggest and most complex, but in reality it is just a bloated mess.

2) It never works like that in reality. No one is paid to spend time only improving the code; we are paid to provide value to the users. The project with the good architecture is easier to work with, and it will be easier to add features and keep improving the code. The bloated mess will be much more difficult to work with, and you won't have the time to improve it beyond picking the low-hanging fruit.


I think LOC probably works OK as a very crude metric until you start measuring people by it. Once you do, all hell breaks loose. If you measure me on lines of code, well, I can get you lots of lines of code. No problem at all...


I seem to recall a quote from some computer scientist, possibly Knuth but I'm not sure, that went something like 'You optimize what you measure.' So if you measure LOC, you get LOC... and far more of them than is by any reasonable standard 'necessary.' Really a horrendous metric.


There is no universal metric to gauge productivity of software engineers.

The added value a software engineer produces with their work in a single unit of time has huge variance. Furthermore, it can range from negative to positive.

That is, it does not matter how long one works. The only thing a software engineer's long hours communicate is that they work long hours.


I think his argument still applies to an "overall" metric.


Are you saying the 40 hour developer has a lower code quality than the 80 hour developer?



