I am certainly not a great programmer, but I'll add a concept to this list: you avoid bugs by not writing code.
Ruthlessly focus on the minimum viable amount of code for a given problem, or on whether the problem can be avoided entirely. Minimalism in code and software design is a beautiful thing, but it doesn't get a lot of press because of "the new hotness", "grow or die", "we have to do something", and so on.
Yeah, I know a lot of programmers who subscribe to this theory. Frequently programmers I prefer not to work with, I'm afraid.
I prefer working with the programmers who write a lot of code (often with bugs) and then refactor into something minimal. The advantage is that they produce something early that we can get feedback on, and we can spiral in on the right solution, both architecture- and feature-wise.
I guess everyone is entitled to their preferences, as long as they don't try to force them onto others.
The problem with the "code a lot, then refactor" approach is that it takes a lot of different skills to actually pull off. Some of the pitfalls I have seen are:
1. You need an unusual amount of perseverance, or the feedback of very knowledgeable, outspoken and candid coworkers, to actually perform an acceptable refactor, instead of the usual brushing of dirt under a thin (leaky) abstraction. Some of my best work I have done while challenged by colleagues who did not allow me to get away with the "good enough" version I initially wanted to settle for.
2. You need to command an amount of social capital that is not normally granted to mere programmers in order to tell the higher-ups: "Yes, it seems to be working, but no, it is not production ready yet. Do I need to remind you, again, that you agreed to create a throw-away prototype from the very beginning?" In my experience, experimental code - no matter how rough - gets promoted to official status the very minute it's pushed into your source control system.
3. Even if you can navigate #1 and #2 successfully, you still need a lot of forethought in order to design a robust API that will allow your coworkers to interact with your module without being disrupted by your ongoing refactoring (see the sketch after this list).
3.1 As a corollary to the previous one, the effort to communicate the robust API from #3 grows super-linearly with the size of your team. Not only does the probability of someone being left out of the loop grow with N, but so does the probability of some idiot^H^H^H^H^H misguided person deciding to ignore all warnings and rely on some implementation detail to accomplish some short term milestone.
4. Finally, you need to actually catch the bugs in order to justify all of the above. This is not a separate skill you need on top of the others, but it means that any problem you might have gets amplified by any defect or omission in your testing strategy.
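For what it's worth, here is a minimal TypeScript sketch of what I mean by #3, with entirely made-up names: coworkers code against a small, stable interface, and everything behind it stays free to be refactored.

```typescript
// A minimal sketch of point #3: callers depend only on a small, stable
// interface while the implementation behind it is refactored freely.
// All names here are hypothetical, purely for illustration.
import { randomUUID } from "node:crypto";

export type ReportId = string;

export interface Report {
  id?: ReportId;
  title: string;
  body: string;
}

// The stable surface: this is the only thing coworkers should import.
export interface ReportStore {
  save(report: Report): Promise<ReportId>;
  findById(id: ReportId): Promise<Report | undefined>;
}

// Today's throw-away implementation; it can be swapped for a database-backed
// one later without touching any caller.
export class InMemoryReportStore implements ReportStore {
  private reports = new Map<ReportId, Report>();

  async save(report: Report): Promise<ReportId> {
    const id = report.id ?? randomUUID();
    this.reports.set(id, { ...report, id });
    return id;
  }

  async findById(id: ReportId): Promise<Report | undefined> {
    return this.reports.get(id);
  }
}
```

The point is that the interface is the contract you communicate once; the churn stays on the implementation side.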
> You need an unusual amount of perseverance, or the feedback of very knowledgeable, outspoken and candid coworkers to actually perform an acceptable refactor, instead of the usual brushing of dirt under a thin (leaky) abstraction.
Refactoring is the most fun part! It's lovely when things become tidy, straight, and clean.
It's horses for courses, really. It all comes down to the cost of an escaped defect: the impact of failure and the cost to rectify it versus the cost of writing bug-free code. If, for instance, you're writing code for a vertically integrated application that you manage yourself, you can fix it any time at little cost, so simply "move fast and break things" makes sense. If you're shipping software where usage and maintenance is out of your control, you should be a little more circumspect. If your code does something critical, like managing e.g. a space shuttle [0], then there should be a significant $/£/€ tag attached to failure such that justifying an overbearing process is a no-brainer.
I learned this when I dabbled in real time systems. Those dudes can crank out some hella reliable code. At first I thought it was despite the cumbersome dev processes that old school RT software uses. Now I realize it is because the process imposes a high cost on each line of code. So, there's less code. Less code = fewer bugs.
This is the one piece of advice I give my team every SINGLE day. I get them to spend time upfront to consider whether the code is necessary at all. If you don't need it, don't write it.
I work in embedded, and the more code there is in there, the more chance a bug will happen. With embedded, we are not just talking about pure software bugs, we are talking about hardware bugs that are root-caused to software functions. Sometimes they are easy to find, sometimes very difficult.
1. Do not write *any* code (can't argue with that)
2. Write "good" code
3. Insert paradigm here (like TDD, etc)
Be careful with "Ruthlessly focus on the minimum viable amount of code for a given problem". In my experience it will lead to "hacks" upon hacks, until the code-base gets unmanageable.
I was hoping to find something concrete here but found mostly fluff (and some nice quotes, sure).
I've worked with some incredible engineers so I'm going to share what I've noticed and learned over the years:
1. They write code which is elegant and easy to read (cyclomatic complexity is "magically" low) - see the sketch after this list.
2. They know their data structures, algorithms and use them when required -- that is, when there isn't a nice, clean implementation of something, they aren't afraid of writing their own version.
3. They write unit tests. They don't shy away from creating test infrastructure even if this means a lot of work on its own. The stuff built is then easy to use by future contributors.
4. They're aware of new features and frameworks.
5. They're generally friendly and are willing to explain stuff to others.
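To put a little flesh on point 1, here's a tiny, hypothetical TypeScript sketch of the kind of difference I mean: the same rule written as a nested pyramid and as flat guard clauses.

```typescript
// Hypothetical example: the same shipping rule, written two ways.
interface Order {
  items: string[];
  paid: boolean;
  shipped: boolean;
}

// Nested version: every new rule deepens the pyramid and hides the exits.
function canShipNested(order: Order): boolean {
  if (order.items.length > 0) {
    if (order.paid) {
      if (!order.shipped) {
        return true;
      }
    }
  }
  return false;
}

// Guard-clause version: same behaviour, but each condition reads as one
// flat statement and the function scans top to bottom.
function canShip(order: Order): boolean {
  if (order.items.length === 0) return false;
  if (!order.paid) return false;
  if (order.shipped) return false;
  return true;
}
```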
Most of the very best developers I have had the opportunity to work with had this quality. They are very comfortable in what they know and what they don't know.
Nearly every cocky programmer I have ever worked with was average at best. And you could never teach them anything they didn't "already" know.
This seems like trivial advice that has very little impact on day to day programming. "Just try to think of all the possibilities and you'll have fewer bugs."
It would be more useful to have a reference of quality techniques that have at least some data behind their claims of efficiency. I know of this list:
I think she's trying to imply that we should not jump into coding without putting in some thought on all possible scenarios. Obviously there is no way that anyone can tabulate all possible scenarios, but the time spent dwelling on possibilities helps to avoid the initial flurry of bugs after a feature is deployed.
Well you can choose to ignore what a non-technical manager says (if you don't care for him/her).
I don't know what your programming experience is, but if you've done some moderately complex projects this might sound obvious but is overlooked by a lot of people.
Source: Personal experience which led to a lot of sleepless nights
"Just try to think of all the possibilities" is excellent advice. I got told that when I was a younger programmer. Now I live by it. I agree that it's not an easily actionable kind of advice, but probably in any field some of the best advice is not easy to understand or put into practice.
While these are three of the top software minds, and great ideas that should be incorporated into your own thoughts, I feel like these three example people and philosophies come from a different era.
Bob Martin's seems the most timeless approach to me, building things in small pieces and testing everything is as applicable now as it has ever been.
Knuth's documenting everything is still absolutely valid, but very few people are in a position to go months without testing, nor to offer bounties on bugs. Those things sort of require writing software that doesn't depend on other people's libraries. That used to be the norm decades ago, but it's really rare now, and most often not remotely possible.
Dijkstra's approach is now only metaphorical. The example is mathy, and if you're writing mathy code, proving things about it makes sense, but writing large async web projects with a lot of UI & network communication... by all means you want to think about how to cover all your bases, but proving things mathematically about that kind of code isn't realistic or practical, if it ever was.
> but writing large async web projects with a lot of UI & network communication... by all means you want to think about how to cover all your bases, but proving things mathematically about that kind of code isn't realistic or practical, if it ever was.
When you are writing async code, it's more important than ever to try to think with the proving mindset, and at least consider in your mind why you won't get deadlocks or race conditions. It matters more for async code because you have no chance of finding all the deadlocks and race conditions through testing alone.
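To make that concrete, here's a minimal, hypothetical TypeScript sketch of the kind of race that tests almost never surface, but a few minutes of "what can interleave here?" reasoning will catch:

```typescript
// Check-then-act across an await: two concurrent callers can both see the
// cache as empty and both trigger the expensive fetch. A test suite will
// almost always get lucky here and pass.
let cached: string | undefined;

async function getConfigRacy(fetchConfig: () => Promise<string>): Promise<string> {
  if (cached === undefined) {
    cached = await fetchConfig(); // another caller can interleave here
  }
  return cached;
}

// Reasoning about the interleaving suggests caching the promise itself, so
// the check and the "act" happen in one synchronous step with no await between.
let pending: Promise<string> | undefined;

function getConfig(fetchConfig: () => Promise<string>): Promise<string> {
  if (pending === undefined) {
    pending = fetchConfig();
  }
  return pending;
}
```

The fix doesn't come from running it more times; it comes from asking what can happen between the check and the assignment.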
(Also, you misunderstood Dijkstra's approach. He didn't prove all his code, nor did he teach people to do that. He merely wanted them to have the mindset of trying to think of everything that can go wrong).
Upvoted because I agree with everything you said. Except the part about me misunderstanding. You don't have to claim I'm wrong to make your own point, I did read the essay. It makes me smile that you phrased it as a disagreement and said I misunderstood, but then you repeated back to me the same point I made: you want to think about how to cover all your bases.
And my larger point still stands - Dijkstra's quote is anachronistic. Having a mindset of trying to think of everything that can go wrong can no longer in today's world be done without debugging or by "not introducing bugs to start with". That used to be possible with a proving mindset, but it's not anymore. A proving mindset is still valuable, but testing and debugging is now invaluable because we rely more than ever before on software we didn't write. Nobody can reason conclusively about async software that relies on dozens of npm projects.
* edit: I just thought of a better way to state what I mean: Having a proving mindset in a modern development environment means debugging and testing. The only way to cover all your bases and prove your code in all cases these days is to run it in all cases.
I'd never trust someone who claimed they proved their code in advance and didn't need to debug it because they don't write bugs in the first place. If someone said that to me today, it would be like saying out loud that they have the exact opposite of a proving mindset, and then I'd prove it by running the code and finding the bugs.
Hmmmmm, let me try to understand what you are saying better then.
> Nobody can reason conclusively about async software that relies on dozens of npm projects
Indeed, as we've seen, a single simple dependency can break a lot of things, and in general it's not practical to go through all the dependencies in npm and make sure they are ok.
> And my larger point still stands - Dijkstra's quote is anachronistic "not introducing bugs to start with"
Dijkstra (especially after "goto considered harmful") had a habit of making hyperbolic quotes that didn't match what he actually believed. To understand what he actually believed, and what he actually did in his own programs, it's necessary to dig deeper.
Fortunately Dijkstra wrote a lot, and he wrote a text book which he used to teach beginning programmers the 'ideal' way to program. In his textbook, it's true, he began by teaching students how to prove program correctness. However, he quickly moved on from that, at one point saying, "we gain nothing here by going through the work of proving this program correct."
In other words, he was not trying to tell everyone to program by proof, nor did he do it in his own work; rather, he wanted people to have the proving mindset (as opposed to the "hey, it works, ship it!" mindset). Dijkstra was aware of Knuth's famous quote, "Beware of bugs in the above code; I have only proved it correct, not tried it." In the OP I tried to show that all three of these programmers have that mindset, but it manifests in strikingly different ways. There are many techniques that can be used, but the underlying principle is the same for all of them.
And I think you will agree that NPM is just crying out for a little more formality :)
> Dijkstra had a habit of making hyperbolic quotes that didn't match what he actually believed.
Then maybe we've learned that choosing to use hyperbolic quotes for blog posts can lead to unnecessary disagreement, or perhaps in our case, violent agreement? ;)
> And I think you will agree that NPM is just crying out for a little more formality :)
Sure you do and you just did. Knuth wrote a hugely successful typesetting system that is still the standard for mathematical texts. That was one of his accomplishments. I don't know about the others in the article - off the top of my head I can't think of a software system that the others wrote that I have used. But Knuth is a programmer who actually writes code.
Really? If you just take time to think then everything will go nice and smoothly and you'll always have spare capacity? Let us know how that goes for you when you get a real job.
Don't get me wrong, good theory matters. But good, clean design also matters. Clarity of thought (about the problem space as well as the theory) matters. Good tests matter. (Someone said, "It's amazing how many bugs a proven-correct program can have.") Good process matters. It all matters.
No, many great developers have their hands full because they have a number of ambitious projects. Certainly some tasks take enormous effort regardless of your technique.
It only appears bug-free, because nobody has the time, energy, and patience to read djb's awful code. He's much better cryptographer than programmer, and he should stick with that.
I think the common denominator is that they all have a clear expectation how their code should behave. In my work I see a lot of situations where something goes wrong and the developer doesn't know exactly what should really have happened. Somehow it worked and that's good enough.
Exactly. The hallmark of any mediocre programmer and above is simple: they know the field they're writing software for. Usually pretty well.
Most programmers I know simply demand the requirements are "clear". Unfortunately that often doesn't work, as it would require explaining half a year's worth of economic theory and then another half a year detailing a client firm's experiences selling X/manufacturing Y/...
There's just no working with that. You cannot think of all the possibilities because you cannot attach reasonable chances of actually happening to anything.
When starting a mission, I always make a point of spending at least a few days manually solving the problem the software I write will solve. Doing so lets me know what I know and don't know, and it informs me of both the shortcuts used in practice and the political problems/attitudes towards automation.
Without reasonably correctly assessing all of this a software design needs to get very lucky to work at all.
Tackle simple, well defined projects. Avoid anything like UI where it's hard to predict how the end user will stretch the code or how it might be hacked.
Maybe they don't have one. 'Cause you know there's people out there who would pick every line of code apart looking for a bug just to prove a point and be rather nasty about it.
I think the conclusion is wrong. I think it's not a mathematical mindset per se, it's just that all three insist on objective evidence that their code does what they intend. That isn't "mathematical", especially for Bob Martin.