Over the years I've worked with many developers who don't write ANY unit tests, relying only on integration tests, and this has caused severe bugs that could easily have been caught by unit tests. It has cost the companies they work for a fair bit of money.
I've called these developers out, and they often seem to be against unit tests on principle: they think tests slow them down, when in reality the cost of cleaning up afterwards is higher.
When you start writing code with unit tests in mind, you generally follow best practices and start to realise when a "unit" is too big and needs to be split into smaller units. I've also found that the anti-unit-test developers I've worked with are commonly not keen on mocking things out either (but again, that's all anecdotal).
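To make the mocking point concrete, here's a minimal sketch (hypothetical names, using Python's built-in unittest.mock): a small unit that depends on an external payment service, tested without ever touching the real thing.

    # Minimal sketch (hypothetical names): a unit with an external
    # dependency, tested by mocking the dependency out.
    from unittest.mock import Mock

    def checkout(cart_total, gateway):
        # The "unit" under test: apply a bulk discount, then charge.
        if cart_total > 100:
            cart_total *= 0.9
        return gateway.charge(cart_total)

    def test_checkout_applies_bulk_discount():
        gateway = Mock()                  # stand-in for the real payment service
        gateway.charge.return_value = True
        assert checkout(200, gateway) is True
        gateway.charge.assert_called_once_with(180.0)

If checkout also formatted receipts and wrote to a database, you couldn't test it this cheaply - which is usually the moment you notice the unit is too big.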
Generally I find Fowler's guidance on the test pyramid worth considering: https://martinfowler.com/articles/practical-test-pyramid.html
As one of those developers, the problem I face is learning how and why to write tests. Most of the tutorials I've read on the matter test things that are too trivial, and test too often. It also doesn't help that the people I've worked with don't have a clue how to do it either, and those who do think everybody should just get it and can't be bothered to explain it.
I recommend reading The Art of Unit Testing -> https://www.manning.com/books/the-art-of-unit-testing . It's the book I learned TDD from more than 10 years ago, and the knowledge I gained from it has proven to be timeless.
I see this problem a lot. TDD is a hard skill to learn. It took me 2 years of continuous practice to really get it, and by year 3 I was only just starting to get decent at it. People who try and do TDD for less than 6 months haven't even left the parking lot yet.
I try to short-cut that steep learning curve by mentoring other developers, but there aren't enough people around who have that experience. I've met lots of programmers, and fewer than 10% of them have ever even tried TDD, let alone done it enough to gain insight and mentor others.
I completely agree; I feel like I'm in the same boat. I do feel like I'm slowly making progress, though - primarily by making small contributions to open source libraries that have tests, which requires updating and/or writing new tests, and then trying to replicate tests like that in my own small libraries.
It could also be due to the way the company incentivizes developers. Say a new system or tool needs to be shipped this quarter. The company could incentivize delivery by offering bonuses to the team if they ship it on time. But if a bug is a minor issue, or will only surface 6 months after the ship date (a performance issue, a leap-year issue, etc.), developers may elect to ship it as "broken" to meet their deadline. Then another "maintenance team" will be responsible for fixing the bug 6-12 months later, when it surfaces in production.
Plus, if no one ever gets fired for shipping buggy code, why bother working so hard on bug-free code? It's a tradeoff.
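To make the leap-year example concrete, a sketch (hypothetical function, pytest-style asserts): a naive "year % 4 == 0" check passes casual manual testing, but a three-line unit test pins the century rules down long before they ever bite in production.

    # Minimal sketch: the Gregorian leap-year rule, plus the unit test
    # that a naive "year % 4 == 0" implementation would fail.
    def is_leap_year(year):
        # Every 4 years, except centuries, except every 400 years.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def test_leap_year_century_rules():
        assert is_leap_year(2024) is True    # ordinary leap year
        assert is_leap_year(1900) is False   # century, not divisible by 400
        assert is_leap_year(2000) is True    # divisible by 400

An integration test that happens to run in a non-leap year never exercises these cases; the unit test does, on every build.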