I went to an Agile class where the lecturer compared unit tests to double-entry bookkeeping. An accountant doesn't say "oh, I don't need to add up both columns here, I know it's just trivial addition".
Once I got in the habit of writing tests for even the simplest transformations, the code complexity and my test complexity grew at the same rate, so it's much harder to end up with a giant untestable mass.
I once spent over a month tracking down a bug (in a different project than the one I mentioned above) that I have a hard time seeing how unit testing would have caught. The program was a simple process (no threads, no multiprocessing) that, depending on which system it ran on, would crash with a seg fault. The resulting core files were useless, as each crash was in a different ___location.
It turned out I was calling a non-re-entrant function (indirectly) in a signal handler (so technically it was multithreaded) and the crash really depended on one function being interrupted at just the right ___location by the right signal. That's why it took a month of staring at the code before I found the issue. Individually, every function worked exactly as designed. Together, they all worked except in one odd-ball edge case that varied from system to system (on my development system, the program could run for days before a crash; on the production system it would crash after a few hours). The fix was straightforward once the bug was found, but finding it was a pain.
So please, I would love to know how unit tests would have helped find that bug. Yes, it is possible to write code to hopefully trigger that situation (run the process while another process continuously sends the signals it handles), but how long do I run the test for? How do I know it passed?
No, unit testing doesn't tell you whether your constructs are safely composable. So it will pretty much never find a threading bug, a concurrency bug, a reentrancy bug, etc.
I only know three ways to detect this sort of bug, and they all suck: 1) get smart people to stare at all of your code; 2) brute-force as many combinations as possible; 3) move the problem into your language's type system so you can catch it with static analysis.
I don't think even the most hardcore TDD zealots would come anywhere close to claiming that testing is a silver bullet. There will always be cases where you didn't think of a particular edge case, or where some environment-based issue makes covering something in a test impossible. That doesn't negate its benefits in preventing the 99% of bugs that aren't an insanely rare edge case.
I don't think you should expect every bug to be caught by unit testing. But where it helps with a problem like that is eliminating a lot of other possible causes of bugs. Debugging something like this is often a needle-in-a-haystack problem, but it's nice if you can rule out most of the hay from the beginning.
In this case, once I discovered the cause of the bug I would have written a unit test that exposed it, probably a very focused one. Then I would have gone hunting for other missed opportunities to test for this, and I imagine my team would have come up with some sort of general rule for testing signal handlers.