Beg to differ. Code that "works" the first time it is run is no different from code that "works" after several debugging cycles. You still have to prove (or test, in a pinch) that it works in either case.
My point is that striving for code that "works" the first time is at best a false optimization.
It takes more time to prove the code than it does to write the code, and it generally takes more time to design the code than it takes to prove it.
Syntax errors are caught by the compiler and are very quick to fix. Semantic errors are usually caught by compiler warnings (provided you turn them on) and by static analyzers, and are also quick and easy to fix. Anything more insidious won't be uncovered merely because the code "works" the first time it's run, so you really haven't saved yourself anything.
The point of the article, as I see it, is that you should strive for clear and correct writing, from the outset. Claiming that this goal is somehow detrimental to code quality strikes me as absurd.
Fixing syntax errors will do nothing for the logic errors. I find it more plausible that fixing syntax errors caused by sloppy writing will cost you time and energy that would be better spent getting the logic right.