Software development topics I've changed my mind on after 6 years in dev (chriskiehl.com)
85 points by mikro2nd on July 20, 2022 | 58 comments



After 15 years, I no longer judge past coding decisions, and I think you can save yourself a lot of grief by not waiting as long as I did to realize this.

You don’t know the constraints the previous person was under, so you can’t possibly know what you really would have done or what “should” have been done.

I mention this because there’s still a tone of, “there is approximately one right way to code” in this article, and I really doubt that.

Make good choices now, but don’t be harsh to the folks who came before you.


this. 1000x this.

I work in tech diligence, I talk to founders about how their project got the way it is when they are selling it, and what the potential ramifications of those constraints and the resulting decisions are for the acquirer. Most developers have little to no understanding of the constraints under which businesses are built and are quick to complain that code must have been made by idiots without thinking of all the drivers that might have created a situation where non-optimal code was a good (or the only possible!) decision.

Tech debt absolutely kills companies - I've seen bad tech debt torpedo potential acquisitions or result in very significant changes in valuation - but not as much as does running out of money.


this. 1000x this.

Conway's Law says the structure of code mirrors the structure of the organization that produced it. There's a corollary to that somewhere that says the financial history of an organization is embedded in the structure of the code it writes.

I've also started having problems with the phrase "Technical Debt." Talking with a non-technical co-founder once about why they had so much technical debt, their concept was "debt is great! we cut corners today and pay it back when we get more funding!" I had to break it to him that unlike regular debt, where it's easier to structure budgets to meet interest payments, technical debt reduces velocity, making it harder and harder to reliably deliver software on time and on budget. The interest payment on technical debt is super-polynomial, not linear.
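
A toy model with made-up numbers makes the difference visible: a loan charges the same interest every sprint, while unpaid tech debt breeds more debt (workarounds on top of workarounds), so the velocity it eats compounds.

  # Toy model, hypothetical numbers -- not a claim about any real project.
  def loan_interest(sprints, payment=10.0):
      # A regular loan: the same payment every sprint (linear).
      return payment * sprints

  def velocity_lost_to_debt(sprints, debt=10.0, growth=0.15):
      # Unpaid debt compounds: each sprint's workarounds make the next one slower.
      lost = 0.0
      for _ in range(sprints):
          lost += debt          # this sprint's "interest payment" in story points
          debt *= 1 + growth    # the unpaid debt grows
      return lost

  for n in (4, 12, 24):
      print(n, loan_interest(n), round(velocity_lost_to_debt(n)))
  # 4: 40 vs ~50, 12: 120 vs ~290, 24: 240 vs ~1842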


>their concept was "debt is great! we cut corners today and pay it back when we get more funding!"

They don't understand that the interest rate on technical debt is worse than any loan shark's. At least, that's my estimate. I thought this was a sufficiently interesting question that I made an Ask HN thread [1]

  [1] - https://news.ycombinator.com/item?id=32175184


Tech debt is basically owing Don Corleone a favor.

You don't know what you're going to have to do or when, but you do not want to take that obligation lightly.


Maybe the answer is not the term "debt" but explaining the currency that you're paying the "debt" in?

In the end, how did the non-technical founder take it?


Yup. We started using the term "Development Molasses" and the non-tech people started to grok what was happening. I think the product people grokked it, since our discussions about what's in the backlog and code quality became much more in-depth. So you can teach marketing people things; you just have to use jargon they can understand.


I agree.

I was taught that computers don't care how you format things, where you place things, what language or principles you use, etc.. All of that is for OTHER developers.

The machines do as instructed, but all the fluff around coding is so other developers can see what you were trying to tell the machine to do.


> Micro-services require justification

Since the micro-service craze started, I've been yelling this from the rooftops. Making everything a micro-service by default is pure insanity. The overhead to maintain them requires exponentially more resources than a monolith or a few services. A few weeks before I left my last job, a very high level engineer gave a presentation where he stated, in all seriousness, that we should be producing one micro-service per day with the goal to be two per day after one year. I didn't need any motivation to leave but that would have done the trick.


You should have marked this comment with a warning. "One micro service per day" is the most ridiculous thing I have heard in a long time and I also threw up a little.

Good for you that you left though :-)


One micro service a day keeps real work away!


"Lines of Code per day are dead! Long live Micro-services per day!"


I feel like most of the best paying companies are emphasizing microservices (except maybe Google). This is just my impression and I realize it's probably not accurate.

What are some solid companies with competitive pay that only use microservices when the need arises instead of by default, using monoliths otherwise?


The problem is that the term “microservices” was so much catchier than “service oriented architecture” that people went overboard with it. Macroservices are a much better idea imo.


> Code coverage has absolutely nothing to do with code quality

This absolute is somewhat misplaced, in my experience. Yes, hitting 100% code coverage doesn't inherently make the code better, but it does force people to look at the tests that were already there, and that often provides some useful context.

"Code coverage is only loosely correlated with code quality" would be my take.


Also see "Why most unit testing is waste" and "Segue" by Jim Coplien.

The main takeaway is that if you test your code at the wrong ("unit") level, you can easily get ~100% coverage that is mostly worthless. Test at higher levels instead, i.e. against business requirements, and don't worry about code coverage so much.
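
A minimal sketch of the distinction, using a made-up domain and pytest-style tests: the first test is pinned to an internal helper that could be refactored away tomorrow; the second is pinned to the business rule itself.

  # Hypothetical example -- the same rule, tested at two levels.
  def _is_weekend(day):                    # internal detail, likely to change shape
      return day in ("sat", "sun")

  def shipping_cost(weight_kg, day):
      base = 5 + 2 * weight_kg
      return base * 1.5 if _is_weekend(day) else base

  def test_is_weekend():                   # "unit" level: covers the helper's lines,
      assert _is_weekend("sat")            # but says nothing the business cares about
      assert not _is_weekend("mon")

  def test_weekend_shipping_costs_50_percent_more():   # requirement level
      assert shipping_cost(2, "sat") == shipping_cost(2, "mon") * 1.5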


Even weirder... I've come to think code coverage may be related more to security / safety than correctness. People often think they're buying safety when they go from 80% coverage to 100% coverage, and I don't think that's always true. And while it's a laudable goal, there are probably better ways to confirm safety than assuring you have 100% code coverage.

We've noticed that you can still have all sorts of problems in code that has 100% coverage. We started using a TLA+-like "environment" to automagically prove assertions about our code (mostly things like "there's only one path that gets you to this counter := counter + 1 line and there's a mutex around it"). It's a pain in the rear, but probably less of a pain than trying to prove that sort of thing with traditional tests and an exhortation that you have to have 100% coverage.
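
For anyone curious what that kind of invariant looks like in ordinary code, here's a rough Python sketch (not the parent's actual tooling): the property being checked is that the one line mutating the counter is always reached with the lock held.

  import threading

  counter = 0
  counter_lock = threading.Lock()

  def record_event():
      global counter
      with counter_lock:
          counter += 1   # the invariant: the only line mutating counter, lock always held

  # A single-threaded test gives 100% line coverage here, which is exactly
  # why coverage says nothing about the race you actually care about.
  threads = [threading.Thread(target=record_event) for _ in range(8)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()
  assert counter == 8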


In my experience, mutation testing is an amazing tool to discover code paths not actually covered by tests, regardless of the coverage score.
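
For those who haven't seen it, the core idea in a hand-rolled nutshell (tools like mutmut for Python or PIT for Java automate this): the tool flips an operator and checks whether any test notices.

  # Original code -- 100% line coverage from the test below.
  def can_withdraw(balance, amount):
      return amount <= balance

  # A mutant the tool might generate: "<=" flipped to "<".
  def can_withdraw_mutant(balance, amount):
      return amount < balance

  def test_cannot_overdraw():
      assert not can_withdraw(balance=50, amount=100)

  # The test passes against BOTH versions, so the mutant "survives":
  # the boundary case (amount == balance) was never tested, even though
  # the coverage report shows every line executed.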


Code coverage can show the absence of good tests, but not their existence.


I think some of these changes in attitude are a natural result of progression as a developer. For instance, when you're just starting out, having a good project manager or business analyst talk to users is valuable, but as you get more experienced you can do a pretty good job of it yourself. Likewise DRY is a good principle to remember when you're starting out, to prevent bad beginner mistakes like writing 1000-line methods instead of breaking the problem down into smaller pieces. But as you get more experienced writing good modular code becomes second nature and instead you become more aware of the cases when the DRY principle gets used to justify over-engineering.
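
A made-up example of that over-engineering trap: two totals that merely look alike today get merged "for DRY", and the shared function is doomed to sprout flags once the rules diverge.

  # Coincidentally similar today (hypothetical domain):
  def order_total(items):
      return sum(i.price * i.qty for i in items)

  def invoice_total(lines):
      return sum(l.price * l.qty for l in lines)

  # The "DRY" merge couples two business concepts that change for
  # different reasons; future tax/discount/currency rules end up as flags:
  def total(rows, kind):
      subtotal = sum(r.price * r.qty for r in rows)
      if kind == "invoice":
          pass  # invoice-only adjustments accumulate here
      return subtotal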


>clever code isn't usually good code. Clarity trumps all other concerns.

this is a good one to pick up early. Looking forward to my 20 in dev.


I will write “clever” code to put constraints on the code. Clever code is hard to change and read, which is sometimes what you actually want. Sometimes you actually want anyone who wants to change a bit of complex logic to fully understand what they are changing before making that change. IOW, the cleverness is there for a reason.


Code might still be useful long after the original constraints / platforms / compilers / requirements / reasons / etc. are gone.

If you have a tradeoff between something today and clarity, you might thank yourself later that you didn't go for it and kept the code amenable to change. Or, as with any tradeoff, you might be right to go for the cleverness.

I've learned that I want to err on the side of clarity, not cleverness, but YMMV.


Yeah, I agree with that. Knowing when to break the rules and having reviewers agree with it is def a very rare thing. I think I’ve done this only a handful of times in the last 20 or so years.


> Code coverage has absolutely nothing to do with code quality.

Certainly you can go wrong by chasing high code coverage, yet I have only one application that I have written that has had zero bug reports filed against it. On a whim I decided to try for 100 percent coverage and in the process found a small pile of bugs. Since then nobody else has found any. At least by that metric it is high quality.
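
The bugs tend to hide in the branches nothing ever exercised; a made-up but typical example of what the last few percent of coverage turns up:

  def parse_port(value, default=8080):
      if value is None or value == "":
          return default
      port = int(value)
      if port < 0 or port > 65535:
          # Never executed in production or tests, so nobody noticed the
          # typo: this line raises NameError instead of the intended error.
          raise ValueError(f"port out of range: {prot}")
      return port

  # The test written purely to cover that branch is what finally finds it:
  def test_rejects_out_of_range_port():
      import pytest
      with pytest.raises(ValueError):
          parse_port("70000")   # fails today: NameError, not ValueError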


This seems right. Whenever I've pushed for higher code coverage, I've found bugs. General code review also becomes easier when you can see that the person tested all of the edge cases and has a clean 100% coverage on the CR. When the code gets deployed I can generally trust that it's going to work, or get rolled back in a staging environment.

It's certainly possible to make high quality code without tests (see Linux), but simply taking the time and writing tests seems to accelerate development.


This is fine if people are diligent about writing good tests.

If you enforce coverage on a busy team with less experienced or less diligent engineers, you're going to get a lot of "tests" that just pass for coverage purposes, give false confidence in your code, and don't gain you anything.


Fair, it's easy to get execution without validation, but ultimately this is where code review matters. The (huge) company that I work at now is the only one that's ever strongly encouraged test coverage, with a general expectation of 85% coverage and 95%+ encouraged.

It's also the only company that I've worked at where you can commit code and get up from your desk.


Innocent question: if nobody found those bugs, what value does it add to fix them?


- Bugs are often found but not reported, leaving the user with a general impression of janky software and damaging your word-of-mouth efforts.

- Some bugs aren't triggered often but are still bad when they happen. Sorting in Python and Java/Android was broken for ages, but not in a way you'd hit often.

- In keeping with the above point, failure rates are exponential in the complexity of your software in the presence of buggy components. Future changes and additions will be problematic on a buggy foundation.

- Many bugs go unnoticed while still causing damage. You know that kind of "oh shit" moment when you compare the last year of payments in one database to another and find duplicates or missing records? You're not sure yet what went wrong, but _something_ definitely wrecked a lot of people's days.

- Similarly with anything involving proper subnormal support, bit-accurate transcendentals, .... If you don't have a good idea what the result was supposed to look like and it was built on a numerical house of cards then you might get lucky most of the time, but some of your outputs are going to be subtly wrong in ways that definitely hurt the business and probably won't be picked up on for ages.

And so on. Maybe it's not worth fixing the bugs in some piece of software (balancing constraints and all that), but there's definitely value to be had.


What will happen when people exploit the bugs that were not found?


> People who stress over code style, linting rules, or other minutia are insane weirdos

Does anyone really stress over these? Everywhere I've used them it's always been: use black/flake8/eslint/prettier with mostly out-of-the-box settings to get some low-hanging-fruit standardisation of the code and avoid people stressing/arguing over it.


I stress about these if they are not already automated. It's sooooo easy for a coworker relationship to sour over style.


I've come to the conclusion that if the code style check is not automated, then it doesn't exist. It's fine to suggest naming changes etc. but manual review of simple stuff like multi-line style or comment style is a waste.

If a code style check is worthwhile then it should be PR-able into one of the mainstream automated code linters, style checkers, or formatters.
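
A minimal sketch of what "automated" can mean, assuming black and flake8 are what your project uses (swap in eslint/prettier or whatever fits your stack): a pre-commit hook that fails the commit, so style never reaches human review.

  #!/usr/bin/env python3
  # Sketch of a .git/hooks/pre-commit script: block the commit if the
  # formatter or linter complains, so humans never argue about style.
  import subprocess
  import sys

  CHECKS = [
      ["black", "--check", "."],
      ["flake8", "."],
  ]

  failed = any(subprocess.run(cmd).returncode != 0 for cmd in CHECKS)
  sys.exit(1 if failed else 0)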


I have worked with people that did, and while rare it is very painful.

In one case the person had seniority over me and kept "refactoring" our codebase in a team of 8. You'd write a function that worked one day and come back the next to find it broken because "I changed the name of that function because it didn't sound right".

Some people are weirdos and nothing short of their manager telling them to STFU and do their job will stop them.


In the early 90s at Convex, we had a tool that would reformat code to your personal style after it was checked out of the repo and reformat it back to our corporate style before it was checked back in. It turned out to cost less to develop this tool than to absorb the daily productivity hit of people spending their day arguing code style instead of working.


I guess it used to be more of a problem before languages started shipping their own linters and code styles (like Rust does), or agreed to use a single tool. Kind of a legacy artifact.


Not sure I agree. As my eyesight degrades over time, I really like the idea of putting a space between a paren and a parameter in function calls / definitions.

  // I prefer to look at this:
  function example( whatever ) {
    :
  }

  // Instead of this:
  function example(whatever)
  {
    :
  }
And I don't think it's appropriate to force everyone who uses your language to adhere to the same style. If you want to enforce a style, just encode that style in the syntax of the language.

Also... my dyslexic coders tend to hate long lines of code, so we sometimes do lispy things like this:

  const foo = whatever(
    "this is a big long parameter",
    ( 2 + bar ) * 90,
    i,
    process.env.HOME || `/mnt/${process.env.USER}`
  );
instead of this:

  const foo = whatever( "this is a big long parameter", ( 2 + bar ) * 90, i, process.env.HOME || `/mnt/${process.env.USER`);
And there's nobody's style that allows the former.


Not quite true. Python's PEP 8, kind of the core Python style guide, states that lines should be no longer than 79 characters. I personally, and I believe many current Python developers, use a more realistic 120-character line length. But in Python, long lines are frowned upon, and your multi-line function call is the preferred approach.


I did not know that. I've sort of always appreciated the python community, even though python leaves me cold.


You'd be surprised. I worked at a place that forked a couple of those tools and used the "in-house" versions since they did not match the desired code style.


That sounds like a giant red flag.

Was it as bad as my spidey senses are telling me?


It was. I left pretty quick.


Agree strongly with most of the points.

In terms of interviews, I prefer to give a problem decomposition question: e.g. class design of a simple real-world system based on rough requirements (see the sketch after the rubric below).

My primary goals in interviewing are to:

1. Filter out candidates who lack the skill or willingness to get things done

2. Identify potential red flags which would be a risk. Most red flags are communication or attitude. Knowledge gaps can be filled.

Things I'm testing for:

* able to follow rough instructions to sequentially break down a problem into multiple layers of abstraction

* naming - unambiguous and conveying purpose

* attitude - easy to collaborate with? do they appreciate/enjoy new challenges?

I'm certainly not perfect, but I have a pretty high degree of confidence in sorting most people's level and fit based on a 1 hr interview:

> junior (IC1): difficulty separating concerns / naming, struggles with use cases. Needs to be very coachable.

> junior-to-mid: has some experience, takes feedback well. needs a lot of coaching. Gaps in knowledge. Would need mentorship

> mid (IC2): competent at programming/data structures. Able to implement a specification but needs coaching turning requirements into a specification. Thoughts or code may not be very organized.

> mid-senior: organized. Makes fast progress. Needs minimal coaching to finish the problem before time. Cares about naming and names things well.

> senior (IC3): able to solve problem quickly (30 mins to finish problems which would take 1 hr for mid-level). Intuitively/quickly understands problem without need for much coaching.

> lead/principal/architect (IC4): IC3 with greater maturity. May take longer than IC3 because of greater detail and thought. Spends extra time clarifying requirements, testability and testing. Very thorough, yet able to complete within the time allotted.
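
To make the prompt concrete, a hypothetical example of the shape I mean (not a real question of mine): "model a parking garage with levels and different spot sizes from rough requirements." A bare-bones decomposition might start like this:

  from dataclasses import dataclass, field
  from enum import Enum

  class SpotSize(Enum):
      COMPACT = 1
      REGULAR = 2
      LARGE = 3

  @dataclass
  class Spot:
      size: SpotSize
      occupied: bool = False

  @dataclass
  class Level:
      spots: list = field(default_factory=list)

      def find_free(self, size):
          # First free spot at least as big as requested, else None.
          return next((s for s in self.spots
                       if not s.occupied and s.size.value >= size.value), None)

  @dataclass
  class Garage:
      levels: list = field(default_factory=list)

      def park(self, size):
          for level in self.levels:
              spot = level.find_free(size)
              if spot:
                  spot.occupied = True
                  return True
          return False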


Very solid guidelines; the importance of generic communication/organizational skills cannot be overstated.


I think my biggest change as a dev isn't so much a concrete opinion, but a feeling. The words "I invented a..." used to be inspiring, now they fill me with dread.

I used to always be on the lookout for places where I could build something better than what's out there. And almost every one of those became a liability more than an asset.

In the industrial revolution and surrounding time periods there was a big shift from made to fit to made to measure.

Previously you'd have a screw you made specifically for this project, all that mattered was that it fit the threads. Now you have an M3 screw, and what matters is that it fits any M3 nut.

I always go for made to measure. There are standards everyone knows. I no longer want to do mental gymnastics to convince myself that my 50 line script was a better choice than Ansible.


> Devs shouldn't be isolated or left to just code. Bypassing TPMs, PMs, and talking directly to the customer always reveals more about the problem, in less time, and with higher accuracy

This is a super common misunderstanding among recent CS grads. School projects being well defined and fully spec'd can make new grads expect that's what their work ought to be. Good PMs and TPMs are super valuable, but their value doesn't come from spec'ing work or anything like that. A good engineer always talks directly to the user, and a good PM works to facilitate that.


> Java isn't that terrible of a language.

I agree, but is it just me that gets annoyed by class files, jars, and working with the JDK? I'm sure it's because I'm a novice, but with Python I just run the script, and with C I build and run the executable. Java is a little weird and in between. It seems well thought out in this regard, and for good reason too, but IMO that's the only thing that annoys me about it. .NET and C# are similar, for example, but there I get an exe rather than a binary that needs an interpreter (although it does need the right runtime installed).


>Pencil and paper are the best programming tools and vastly under used

Most of these sound good, but this one floored me. Beyond the occasional architecture whiteboarding discussion, I don't see much value.

I've also seen it used badly plenty of times - e.g. hand-drawn diagrams of a tech stack that were scanned in, put on the wiki, and out of date by the very next day.


> Trading purity in exchange for practicality is usually a good call

Definitely. I’ve seen this proven again and again.


hmm, I'm not sure I understand this. Can you give an example?


One I've flipped on:

Functional programming has some really solid ideas.


I love the Larry Wall quote: "Programming languages do not differ in what they make possible, but in what they make easy." I grew up coding Lisp, largely in an imperative style, but over time learned some of the tricks you could pull to better denote your intention to the compiler. Lisp, Erlang, OCaml, etc. let you concretize some cool concepts, but dang if they're not hard to learn.

Anyway... +1. Functional Programming has some really solid ideas. (assuming you were saying you flipped over to this viewpoint and not away from it.)


>90%, maybe 93%, of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

Wait, what?


I agree with this. After spending 20 years in dev and several of them as a team lead, the vast majority of the project managers I've worked with added very little value. Most of my interactions with them fell into three categories:

- Asking them for project priorities and waiting while they asked management for the answer.

- Giving them project updates and then listening to them repeat those updates verbatim during the next day's meeting with management, that I was also in.

- Asking for their opinion on how something should work, just because I didn't want 100% of the blame when management didn't like a decision.

And honestly, saying they had no value-add is being generous. The majority of the examples listed above were time sinks.

Of course there was the occasional good PM, but they were very rare.


>90%, maybe 93%, of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

As a former PM, I agree. We're there to help management, not the engineers. The engineers don't care, and it doesn't raise efficiency. I always felt bad about this and went back to engineering.


30+ years of software development experience here and I agree 100%.


Mine is that metaprogramming is more often bad than good for businesses.
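
A made-up example of the kind of thing I mean: the metaprogrammed version saves a few lines today and costs every future reader (and their IDE, and grep) a lot more later.

  # Metaprogrammed: attributes appear at runtime; grep, IDEs, and new
  # teammates can't see that an Invoice has net, tax, and gross.
  class Invoice:
      def __init__(self, **amounts):
          for name, value in amounts.items():
              setattr(self, name, value)

  # Boring version: a few more lines, zero surprises.
  class BoringInvoice:
      def __init__(self, net, tax, gross):
          self.net = net
          self.tax = tax
          self.gross = gross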



