Hacker News
Writing code that works on first go (chestergrant.posterous.com)
23 points by chegra84 on July 10, 2011 | 24 comments



I instinctively distrust code that works first time. I advocate error-driven development.


That's basically synonymous with generic test-driven development, unless you have some magical way of finding errors in code without running some sort of test on it? (Which includes tests like static analysis, valgrind, etc.)

The author's common mistakes scare me for being common. Not that I consider myself a great coder, but come on...

On a personal note, code I write in C++ (especially C++, which I've done a lot of at school) and frequently code I write in Java (which I do a lot of in my day-job) rarely 'works' the first time (though 'work' in this sense is more 'compiles'; runtime is usually pretty good except in C++ with its segfaults that require whipping out gdb and hoping it's not a template error). On the other hand, if I'm using a language I more naturally think in like Python, it's rare that stuff doesn't Just Work at runtime. And when it doesn't work, it's usually because of something I hadn't thought of in the design (and wouldn't even be in any hypothetical pseudo-code written beforehand), and easy to fix. Ahh Python...


That reminds me of an old adage from a former colleague back at H.P. He said start with the null program, and debug it until it does what you want. :)

I do concur with the author's point (3), where he recommends starting with the high-level syntactic forms and filling them out from the top down.

That has more uses than merely preventing syntax errors. I also use it for case-driven development. Often I'll write a routine with this sort of inner dialogue: OK, there are only two cases here. Let me write an "if" for that. Now I'm not quite sure what to do in case 2, so let me focus on the easier case 1. Now here I need to do a three-way comparison of these two strings. OK, the less-than and greater-than cases are easy -- I'll go ahead and do those. The equals case is a little harder. Tell ya what, I'll get to that one later. Meantime, let me go back and address the top-level case 2, because I think I have a bead on that one now.

That sort of thing. And of course, as I go, I develop a test input for each case I've developed.

Also, in those postponed cases, I like to insert a "die" or "abort". That way when I develop a new test input for it, I'll know I'm hitting it because the program will die. Then I can punch into that case and make it handle my new test scenario. I know when I'm done because the code doesn't say "die" anymore -- except when I really do want to assert the impossibility of a certain condition.
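The workflow above can be sketched in C. This is a hypothetical example (the function name and the three-way string comparison are invented for illustration): the easy branches are written first, and the postponed case contains abort() so that a test input reaching it fails loudly instead of silently doing the wrong thing.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Three-way comparison, written case by case. Returns -1 if a sorts
 * first, 1 if b sorts first. The equal-key case is the hard one, so
 * it is postponed: abort() marks it as not yet handled. */
int compare_keys(const char *a, const char *b)
{
    int c = strcmp(a, b);
    if (c < 0)
        return -1;      /* easy case: a sorts first */
    if (c > 0)
        return 1;       /* easy case: b sorts first */

    /* Postponed case. When a new test input hits this branch, the
     * program dies, telling me exactly which case to flesh out next. */
    fprintf(stderr, "compare_keys: equal-key case not handled yet\n");
    abort();
}
```

Once every postponed branch has been filled in, no abort() remains except where a condition really is meant to be impossible.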


The author may unconsciously agree with you: "You know, the rare occasions where everything works on first go and you are pretty sure you made a mistake." [so goes the first line of this blog] Writing blogs without mistakes is hard too...


1. an ide/language with really good static analysis

2. aggressively minimize codebase complexity

done. when my code doesn't work the first time, and i'm writing in a static-typed language, it's usually due to high accidental complexity in my code or code i interface with, requiring me to keep too much context in my head, causing logical problems and problem states to be non-obvious.

stupid errors like writing an if statement wrong are sophomoric. "most common errors" really means avoiding mutable state & leaky abstractions[1]. i do think the OP gets this, it just doesn't come across too clearly.

[1] nostrademons of HN, "how to become an expert swegr" http://news.ycombinator.com/item?id=185153


I'd say the devil is in the "aggressively minimize codebase complexity" detail

If you can give reproducible instructions for how to do that, I'd love to hear them.


agreed, but that's the point: "use 1==foo and keep methods small" is totally missing the point.

I'm not sure I can state it more clearly than "understand accidental complexity, aggressively reduce it".


Should generate some debates, depending on whether you're from the static/dynamic typing, IDE-vs-wetware, TDD-or-not, etc. camps.

3. If you use IDEs or vim/textmate/emacs with a decent language plugin, you'll argue the IDE should match braces and insert skeletal control structures.

5. Self-contained function, so no side effects, isn't that part of the definition of functional purity in them "academic languages" (heh)?

6. How do you define edge and corner cases? Is there something like QuickCheck for your dev environment?


I think it would be great if there was a way to easily capture my "most common error categories".

Years ago I got bit a couple of times by assignment instead of comparison errors (i.e. if(x = 1) instead of if(x == 1))

I noticed a pattern and made a conscious decision to always put the variable last in comparisons, so for years I have written if(1==x) so that the compilation would fail if I wrote if(1=x).
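The habit described above can be sketched in a few lines of C (the function name here is invented for illustration): with the constant on the left, the typo of = for == is a compile error rather than a silent logic bug.

```c
/* "Yoda conditional": constant on the left of the comparison.
 * Typing if (1 = count) by mistake fails to compile, because you
 * can't assign to the literal 1; if (count = 1) would compile
 * and silently always be true. */
int is_singleton(int count)
{
    if (1 == count)
        return 1;
    return 0;
}
```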

Having a way to see my most common errors would give the opportunity to look for similar error-proofing measures.


I have seen this practice referred to as "Yoda conditionals" but never knew the reason for it. Although it reads wrongly (to me), I can now see the logic.

A lot of the code I write is JavaScript in Aptana, which has a real-time JSLint checker that warns against assignments in conditionals.


I have never heard it called "Yoda conditionals" before. That is awesome.


I always compile C code this way, so it warns me about that particular type of error:

  gcc -c -Wall -Werror -ansi ...
You may not have to use all of those options, I'm not sure. The error message is:

  "error: suggest parentheses around assignment used as truth value"
I find that very helpful.
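A minimal sketch of the bug class that warning catches (both function names are invented for illustration). The buggy version compiles but always takes the true branch; note the buggy condition is written here with extra parentheses so the sketch itself compiles cleanly, since dropping them is exactly what makes gcc -Wall emit the "suggest parentheses around assignment used as truth value" warning quoted above.

```c
/* Bug: assignment used as a truth value. The condition sets x to 1
 * and evaluates to 1, so this function returns 1 for ANY input.
 * Without the extra parentheses, gcc -Wall warns about it. */
int buggy_check(int x)
{
    if ((x = 1))
        return 1;
    return 0;
}

/* What was actually intended: a comparison. */
int fixed_check(int x)
{
    if (x == 1)
        return 1;
    return 0;
}
```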


You forgot "-pedantic" and "-Wextra". =P (And possibly a few more one-offs that aren't contained in the groups.) Though the key flag there to make it warn is -Wall, so you can still compile with c99 and avoid the -ansi flag.

Pretty much any compiler or language worth its salt that allows assignment in conditionals with the equals sign (so Python, Lisp are out) will warn you about it.


That is quite interesting.

So, have you stopped using assignments instead of comparison operators, or does the compiler just prevent you from writing a logic error?


I'm far more interested in code that works when it ships than code that works the first time it's run.

Code that works the first time it's run is magical, and therein lies the problem: Magic code is untested and unproven code. It offers a false sense of security, which is why it fills experienced developers with a sense of dread.

Until you prove that code's correctness (such as with tests), it is almost certain that it has nasty bugs in it that your initial successful run hasn't uncovered. So what does having it work the first time give you? Bragging rights, maybe, but nothing of any real value to shipping code.


Beg to differ. Code that "works" the first time it is run is no different from code that "works" after several debugging cycles. You still have to prove (or test, in a pinch) that it works in either case.


My point is that code that "works" the first time is at best a false optimization.

It takes more time to prove the code than it does to write the code, and it generally takes more time to design the code than it takes to prove it.

Syntax errors are caught by the compiler, and are very quick to fix. Semantic errors are usually caught by compiler warnings (provided you turn them on) and by static analyzers, and are also quick and easy to fix. Anything more insidious will not be uncovered by code that "works" the first time it's run, so you really haven't saved yourself anything.


The point of the article, as I see it, is that you should strive for clear and correct writing, from the outset. Claiming that this goal is somehow detrimental to code quality strikes me as absurd.

Fixing syntax errors will do nothing for the logic errors. I find it more plausible that fixing syntax errors coming from sloppy writing will rather cost you time and energy that would be better spent on making the logic right.


Don't you mean at worst a false optimization?


jeez. writing code that works the first time is a productivity hack. it doesn't really have anything to do with preventing logic bugs so much as it has to do with tightening your feedback loop.


My mistake for assuming that HN didn't follow the Reddit pattern of downvoting anything that disagrees with the hivemind...


Non-IDE Python programmer comments, please?


I mainly use vim. I have pyflakes run occasionally and on save, and some commands for running unit tests.


I also use vim, and enjoy this plugin: http://www.vim.org/scripts/script.php?script_id=850



