Your point is valid, and so is some of the dialog in the replies to your comment, so I'm just responding to the root of the thread. What architectures are you working with that point you toward a heavier integration test strategy?
I'd suggest that the balance between Unit Test(s) and Integration Test(s) is a trade-off and depends on the architecture/shape of the System Under Test.
Example: I agree with your assertion that I can get "90%+ coverage" of Units at an integration test layer. However, the underlying system determines whether I'd guide my teams to follow that pattern. In my current stack, the number of faulty service boundaries means that, while an integration test will provide good coverage, the overhead of debugging the root cause of an integration failure creates a significant burden. So I recommend more unit testing, since the failing behaviors can be identified directly.
And, if I were working at a company with better underlying architecture and service boundaries, I'd be pointing them toward a higher rate of integration testing.
So, re: Kent Dodds' "we write tests for confidence and understanding": which layer we write tests at to get that confidence and understanding really depends on the underlying architecture.
I wouldn't count the coverage of integration tests with the same weight as coverage from unit tests.
Unit tests often cover the same line multiple times meaningfully, as it's much easier to exhaust corner case inputs of a single unit in isolation than in an integration test.
Think about a line that does a regex match. You can get 100% line coverage on that line with a single happy path test, or 100% branch coverage with two tests. You probably want to test a regex with a few more cases than that. It can be straightforward from a unit test, but near impossible from an integration test.
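To make that concrete, here's a minimal sketch (a hypothetical username validator in Python with unittest; all names are made up for illustration). One happy-path call "covers" the regex line completely, but the inputs you actually care about only get exercised if you enumerate them, which is cheap at the unit level:

    import re
    import unittest

    # Hypothetical validator: one regex line, "fully covered" by any single call.
    USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,15}$")

    def is_valid_username(name: str) -> bool:
        return bool(USERNAME_RE.match(name))  # 100% line coverage after one test

    class TestUsernameValidation(unittest.TestCase):
        def test_happy_path(self):
            # This alone gets the coverage number for the line above to 100%.
            self.assertTrue(is_valid_username("alice_42"))

        def test_corner_cases(self):
            # None of these change the coverage number, but they're the tests that matter.
            self.assertFalse(is_valid_username(""))        # empty
            self.assertFalse(is_valid_username("1alice"))  # starts with a digit
            self.assertFalse(is_valid_username("ab"))      # below minimum length
            self.assertFalse(is_valid_username("a" * 17))  # above maximum length
            self.assertFalse(is_valid_username("Alice"))   # uppercase not allowed
            self.assertTrue(is_valid_username("abc"))      # minimum length boundary
            self.assertTrue(is_valid_username("a" * 16))   # maximum length boundary

    if __name__ == "__main__":
        unittest.main()

Driving those same inputs through an integration test means building a full end-to-end scenario for each one that happens to reach this line, which is where the "near impossible" part comes from.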
Also, integration tests inherently exercise a lot of code but only assert on a few high-level results, which further inflates coverage compared to unit tests.
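Rough sketch of that inflation effect (hypothetical handler, Python, everything invented for illustration): one end-to-end style call lights up every line of the handler in the coverage report, yet the only behavior actually pinned down is the top-level status:

    # Hypothetical handler with several internal steps (names invented for illustration).
    def handle_signup(payload: dict) -> dict:
        name = payload.get("username", "").strip().lower()           # counted as covered
        if not name:                                                  # counted as covered
            return {"status": "error", "reason": "missing username"}
        record = {"username": name, "active": True}                   # counted as covered
        # ...imagine persistence, notifications, and audit logging here...
        return {"status": "ok", "user": record}                       # counted as covered

    def test_signup_end_to_end():
        # A single call marks every line above as covered,
        # but only one high-level result is actually asserted.
        result = handle_signup({"username": "  Alice  "})
        assert result["status"] == "ok"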
I work for a huge corp, but the startup rules still apply in most situations because we're doing internal stuff where velocity really matters, and the number of small moving parts makes unit tests not very useful.
Unfortunately, integration testing is painful and hardly done here because they keep inventing new bad frameworks for it, sticking more reasonable approaches behind red tape, or raising the bar for unit test coverage. If there were director-level visibility into integration test coverage, things would be very different.
I'd also factor in the stage of the company. What a startup needs from tests is very different from what an enterprise company needs. If you're searching for product-market fit, you need to be able to change things quickly. If you're trying to support a widely used service, you need better test coverage.