In my experience, if you have a medium-sized task with multiple unknowns, it is best to prototype aggressively without a thought about quality, and then start a second iteration with quality in mind. The purpose of the prototyping is learning.
It's faster (yes) than prototype-then-fixup. Why? Because the "live refactor" is harder than the greenfield writing phase. The new knowledge often makes the implementation straightforward.
It's also better quality than design-then-build. The optimal architecture and modularization change as your knowledge grows, and that knowledge is best gained through experience. You can try to design fully upfront, but that's riddled with analysis paralysis - it's notoriously hard (and slow) to predict the unknowns.
Sounds like good advice? Well, the hardest part isn't following it – it's knowing upfront what size the task is. If it turns out to be easier than expected, you waste a bit of work (prototype-then-fixup would have been faster). However, if it's bigger than you thought, you're in the best possible position to break the new problem down into subtasks, with no wasted work.
I believe that if you truly accept what Hemingway said, that writing is rewriting, you get less attached to the idea of reaching the best design on the first try, and feel better about starting with a suboptimal solution.
Of course this sometimes conflicts with organizational pressures, where that quick-and-dirty solution may be deemed good enough by some and you won't get to finish with the proper design. For me the trick is to treat the first version as just an internal stage of work on a feature, most of the time not even communicated outwards, until the appropriate design is reached.
If you could package this up as a motivational poster, it would belong in every company meeting room. Speed and quality are not two opposing forces to trade off. We can have both.
But we need to get rid of this silly, infantile, unwavering attachment to our source code files. Throw code away. All. the. time. The first version of code is, by definition, built in the absence of critical information. Why on earth would we get so attached to something that was built in ignorance? In this case we're not "reusing code", we're throwing away knowledge!
Why would you discard everything valuable you learned in favor of a code artifact written before you learned it? Throw away the code instead! Surely the code written AFTER gaining the knowledge will be both faster to write and better quality (clearer, less tech debt, etc.).
We need better words for the different kinds of code written for different purposes.
Code written to learn and explore a problem space? Sure.
Code written in response to a prompt, which could easily be rewritten - things like a throwaway "please tell me a story about the contents of this CSV and also write code to graph it" (see the sketch below). Yep, throw it away.
Or keep it as an example for a later model.
That's very different from code written to high standards and intended for others' use.
We need different words for all three of those varieties of code.
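For concreteness, a minimal sketch of the kind of throwaway, prompt-generated script being described, assuming a hypothetical data.csv (the file name and columns are made up):

    # Quick, disposable look at a CSV: summarize it and graph the numeric columns.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("data.csv")           # hypothetical file name
    print(df.describe(include="all"))      # the quick "story" about the contents
    df.select_dtypes("number").hist()      # histogram every numeric column
    plt.savefig("quick_look.png")          # good enough; the script itself is disposable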
A quick grep once in a blue moon can be faster than wrangling an LLM into place, and as an added bonus you can look back and laugh at how big of an idiot you were.
I don't think the author is necessarily advocating the throwing away of code here, they're advocating the value of being able to rapidly prototype and move on from seemingly incomplete things.
The whole value proposition of the digital world is that we can store and manipulate data for virtually nothing: there isn't the same cost to keeping digital stuff around, and so there aren't the same gains from throwing it away, IMO.
I've got a million messy files saved up, honestly, even when I know that just letting go could help me think more clearly. Ever wonder whether holding onto old stuff slows you down or actually helps you get smarter over time?
There's something beautiful about not being weighed down by previous artifacts and starting clean with how you imagine you want to build your system. If the system is large enough, you can't do that very often.