Yeah, I never understood this safeguarding against bugs resurfacing after being fixed once. I only saw bugs coming back at a company that didn't use version control and instead copied source code back and forth with a USB stick.
I can understand something like test driven bug fixing, where you basically create a simple test to reproduce the bug quickly and then fix the bug using that. In many cases that is the most efficient workflow.
The test passing can then serve as evidence of the bugfix (though it might not be sufficient on its own). And if you have already written the test, you might as well leave it in, since it usually bothers no one, and the chance that someone breaks this exact same thing again, while tiny, isn't non-existent.
But fixing a bug and then putting in extra work just to write a test, when there is an easier way to prove the bug is fixed? No, thanks.
> Yeah, I never understood this safeguarding against bugs resurfacing after being fixed once.
In my experience it's not infrequent for bugs to get only half-fixed without anyone realizing it, because the true problem actually lies a level deeper, or has a mirror case, or whatever. A good example: a parameter to a command is 0, a bugfix sets it to 1, but a later bugfix changes it back to 0, when the correct fix would set it to 0 in some cases and 1 in others.
And if you fix the bug without a test, then when the second, related bug crops up a couple of months later and somebody else tries to fix it just as naively, they can wind up re-introducing the first bug.
Basically, in practice bugs have this nasty habit of clustering and affecting each other -- if the code was trickier than usual to write in the first place, it's going to be trickier than usual to fix, and more likely than usual to continue to have problems.
So keeping tests for fixed bugs is kind of like applying extra armor where your tank gets hit -- statistically, it's going to pay off.
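The 0-vs-1 scenario above can be sketched as a pair of regression tests. Everything here is hypothetical (the function name, the `remote:`/`local:` distinction), just to make the shape of the safeguard concrete:

```python
# Hypothetical illustration of the half-fix story: suppose a command
# takes a flag that must be 1 for remote targets (bug #1's fix) but
# 0 for local ones (bug #2's fix).

def retry_flag(target: str) -> int:
    """Return the retry flag for a command: 1 for remote targets, 0 for local."""
    return 1 if target.startswith("remote:") else 0

# One regression test per historical bug. If a later "fix" hardcodes
# the flag back to 0 or back to 1, exactly one of these fails and
# points straight at the earlier incident.
assert retry_flag("remote:backup") == 1  # bug #1: remote targets needed 1
assert retry_flag("local:scratch") == 0  # bug #2: local targets needed 0
```

The point isn't the flag itself, it's that each past bug leaves behind one assertion, so the two naive fixes can no longer silently undo each other.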
I have a perfect real-world example. About four years ago some of my code broke in certain cases. I came up with a fix that relied on a case-sensitive regex to check for those cases. I think I made it case-sensitive because I wanted to make sure it didn't trigger accidentally on something added in the future. And these case names had never changed, right?
Yep, now that I've spelled it out, what happened is obvious. Three years later, I got ordered to change one letter in these case names from lower case to upper case. Of course I didn't remember that I'd used a case-sensitive test against the names three years before. And bam, the bug was back, and as there was no test for it, I shipped code with the bug.
The good news is that the bug was obvious as soon as customers tried to compile my code, so it caused no harm beyond embarrassment on my part. Even so, it took me a while to track down what was going on. Imagine my shock when I got into the code and found that the fix I thought I needed to make was already there... but itself needed to be fixed!
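A minimal sketch of that failure mode, with made-up case names since the original ones aren't given. The case-sensitive pattern silently stops matching the moment one letter's case changes:

```python
import re

# Hypothetical reconstruction: a fix gated on a case-sensitive match
# against generated case names.
CASE_NAME = re.compile(r"kFooBar_lower")

# Matched fine for years:
assert CASE_NAME.search("case kFooBar_lower:") is not None

# ...until one letter changed from lower to upper case, and the fix
# silently stopped applying:
assert CASE_NAME.search("case kFooBar_Lower:") is None

# Had the check been case-insensitive (or had a regression test pinned
# the behavior), the rename would not have resurrected the bug:
CASE_NAME_RELAXED = re.compile(r"kfoobar_lower", re.IGNORECASE)
assert CASE_NAME_RELAXED.search("case kFooBar_Lower:") is not None
```

Whether `re.IGNORECASE` is the right fix depends on the actual case names, of course; the test is what would have surfaced the problem before shipping.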