> The company I work for sees shipping potentially buggy code and fixing it as bugs are reported as an acceptable development practice.
That's what I've heard called "trading external quality" for time to market. That is exactly how you're supposed to do it, at least according to the dev coach that I've been listening to (@GeePawHill)
The reason you write tests and refactor is to keep your code's internal quality up. Internal quality is the stuff that customers can't see, the stuff that determines how hard it will be for you to change your code when you come back to do more work later.
The Correlation Principle states that internal software quality (ISQ) and developer productivity are directly correlated, and that you cannot trade internal quality for time to market without that trade quickly coming back to bite you in the form of lost productivity.
A feature that's only partially implemented and an edge case that isn't tested properly and fails at runtime are both examples of external quality. You can indeed come back and fix that later, and you can make this trade cautiously to get your work to production faster.
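To make that second case concrete, here's a made-up sketch (mine, not GeePaw's) of the kind of external-quality bug you can ship now and fix later:

```python
def average_rating(ratings: list[float]) -> float:
    # Works for every case we actually tested...
    return sum(ratings) / len(ratings)

# ...but the untested edge case fails at runtime:
average_rating([])  # ZeroDivisionError: division by zero
```

A one-line guard fixes it later without any redesign; that's what makes it an external-quality trade rather than an internal one.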
But if you write a long method that "just works", for example, and ship it without further introspection, refactoring, or complete test coverage, you may never recover from that. It can cost devs an extra hour or more every time that method needs to change in the future, until it has become so long and complicated that nobody can fully understand it. The time to stop digging is the moment you notice you're standing in that hole.
And according to Sandi Metz, those big long methods are almost guaranteed to be the ones that will need to change again: they're long and complicated precisely because they implement the business logic you care most about. They should be well factored as early as possible to facilitate future changes (unless you have a crystal ball and can say for sure those changes will never come!)
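As a sketch of what that early factoring buys you (a made-up checkout example, not one of Sandi's):

```python
from dataclasses import dataclass, field

TAX_RATE = 0.08  # assumed flat rate, just for the sketch

@dataclass
class Item:
    price: float
    qty: int

@dataclass
class Order:
    items: list[Item] = field(default_factory=list)
    discount: float = 0.0  # 0.10 means 10% off

# Before: one method that "just works", every business rule inlined.
def checkout_total_v1(order: Order) -> float:
    total = 0.0
    for item in order.items:
        total += item.price * item.qty
    total -= total * order.discount
    total += total * TAX_RATE
    return total

# After: each rule has a name, so the next change stays local.
def subtotal(order: Order) -> float:
    return sum(item.price * item.qty for item in order.items)

def apply_discount(amount: float, discount: float) -> float:
    return amount - amount * discount

def add_tax(amount: float) -> float:
    return amount + amount * TAX_RATE

def checkout_total(order: Order) -> float:
    return add_tax(apply_discount(subtotal(order), order.discount))
```

When the tax rules or discount rules change (and they will), the edit lands in one small named function instead of somewhere in the middle of a growing blob.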
That seems like a very simplified model of what actual bugs look like in the wild.
> You can indeed come back and fix that later, and you can make this trade cautiously to get your work to production faster.
I think that is only true at companies of a certain size and kind. If you're a small startup trying to launch an MVP, yes, time to market will be prioritised over everything.
If you're in a regulated sector, or your company is any larger than, say, 30 people, "going back and fixing it later" gets harder and more costly than shipping correct software in the first place: you'll damage your reputation, you'll have a deadline to implement the fix (unnecessarily adding pressure on your own team), or other teams will start relying on the wrong behaviour, at which point your fix is no longer trivial because it no longer affects only a local scope.
I want to highlight this because there's a trend of thinking that any company and any project can (and should!) be managed as if everything being built were a prototype, with the perception that this is "agile", which ends up picking the worst set of trade-offs for the task at hand.
I've just been learning about Wardley Mapping, and Simon Wardley would say you're right, too. The point wasn't only that you CAN trade external quality for time to ship, but that anyone who tells you that you can trade internal quality the same way is probably wrong and shouldn't be believed, as that trade isn't likely to work for long, if at all.
I took the $25 class that you can find easily, and I thought it was very worthwhile! It helped me understand exactly what you said, but with some formal structure: some teams are mad scientists, some are pioneers, some are settlers, and some are town planners.
Some tasks are better suited to mad scientists, ... some to town planners. Some regulated environments are only suited to the type of work that town planners can do (six-sigma people, as it's also explained in the class).
But the settlers won't have much to do if the pioneers haven't done some heavy lifting first, and so on (thanks a lot, mad scientists), and the town planners can't do the type of work you need from them either, unless many layers of groundwork have been laid before them.
Factoring a long and complex method doesn't magically make the process it implements any shorter (unless there is a lot of repetition) or less complex. In fact, it's likely to make it longer and more spread out, and therefore harder to understand from scratch. When I encounter such code, it's far easier to change if you keep it together and document it well, so future people can find out what it's doing and see the whole in context.
For future changes, factoring also requires a crystal ball since you have no idea how requirements will change or if your factoring will actually be useful for them. Better to keep things simple and contained until you actually need to make them more complex.
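In code terms, that's the difference between scattering the steps across a pile of single-use helpers and keeping one well-commented sequence (a toy example of my own):

```python
import csv

def import_report(path: str) -> dict:
    """Parse, validate, and summarise a CSV report in one place.

    Deliberately kept as one method: the steps run in a fixed order
    and are used nowhere else, so comments carry the structure
    instead of a layer of single-use helper functions.
    """
    # Step 1: read the raw rows.
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))

    # Step 2: drop rows that fail basic validation.
    valid = [r for r in rows if (r.get("amount") or "").strip()]

    # Step 3: summarise what's left.
    total = sum(float(r["amount"]) for r in valid)
    return {"rows": len(rows), "valid": len(valid), "total": total}
```

A reader can follow the whole process top to bottom without jumping between five function definitions to reconstruct the order of operations.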
I agree! When a method is too hard to unit test (for an experienced unit tester), it's frequently a sign that it no longer has a single responsibility, or that there are other problems in the design worth sussing out. But there is no substitute for experience and a sharp eye: when you know what you're doing and have confidence about what is most likely to come next, it's almost like having that crystal ball. Rules are also made to be broken.
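A made-up illustration of that smell: if testing a method means stubbing the filesystem and the clock just to check one rule, the rule probably wants to be its own function:

```python
import datetime as dt

# Hard to test: mixes I/O, the current time, and the actual rule.
def log_overdue(path: str) -> None:
    with open(path) as f:
        due = dt.date.fromisoformat(f.read().strip())
    if dt.date.today() > due:
        print("overdue")

# Easy to test: the rule is a pure function; I/O stays at the edges.
def is_overdue(due: dt.date, today: dt.date) -> bool:
    return today > due

assert is_overdue(dt.date(2024, 1, 1), today=dt.date(2024, 6, 1))
assert not is_overdue(dt.date(2024, 6, 1), today=dt.date(2024, 1, 1))
```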
This summarizes my experience to a tee. Early on, I found that little compromises on internal quality come back to bite you almost right away. Compromising on it only makes sense if you're 'building one to throw away', in which case speed of learning is probably the priority. The problem is that the decision makers almost never want to actually throw it away when the time comes.
I've been doing TDD a long time (18+ years), and I'm definitely pragmatic in the sense that I don't test everything, don't aim for 100% coverage, don't use mocks often, and so on. I aim to extract most of the value from the tests without paying an absurdly high tax for them. I'm still learning in that regard, but I have had some real successes with testing.
I find this is key to our broader team's ability to keep internal quality high. It's obviously easy to mess up a code base, but (high quality!) tests have been an important part of keeping ours healthy, alongside refactoring, code review, etc.
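As one small example of what I mean about mocks (hypothetical code, not from a real project), I'd rather test against a tiny real implementation than script call expectations:

```python
class InMemoryStore:
    """A real, tiny implementation used in tests instead of a mock."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def register_user(store, email: str) -> bool:
    # Returns False if the email is already taken.
    if store.get(email) is not None:
        return False
    store.put(email, {"email": email})
    return True

def test_register_user_rejects_duplicates():
    store = InMemoryStore()
    assert register_user(store, "a@example.com")
    assert not register_user(store, "a@example.com")

test_register_user_rejects_duplicates()
```

The test checks behaviour rather than a transcript of method calls, so it keeps passing when the implementation details change.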