Wikipedia (https://en.wikipedia.org/wiki/Code_review) says: "Empirical studies provide evidence that up to 75% of code review defects affect software evolvability/maintainability rather than functionality, suggesting that code reviews are an excellent tool for software companies with long product or system life cycles. This also means that less than 15% of the issues discussed in code reviews are related to bugs."
> When reviewers look for these logic issues, they often run through the code line-by-line using different inputs and see if any lines cause the code to produce the wrong output.
I don't know of anyone who regularly does this during code reviews.
In my experience, automated tests help catch regressions, i.e., failure cases that someone has already anticipated. If the system fails in some brand-new, unexpected way, you won't have tests for it, by definition.
Similarly, static analysis can help catch certain classes of bugs, but there are plenty of things it won't be able to spot.
Yes, they're both useful, but neither of these is a replacement for code review. They're all complementary.
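For instance (a made-up Python sketch; the function, values, and rule are hypothetical), code like this passes the type checker and the only test anyone thought to write, yet still gets the non-member rule wrong:

    import pytest

    # Made-up example: type-checks cleanly and passes the anticipated test,
    # but the business rule for non-members is wrong.

    def discounted_price(price: float, is_member: bool) -> float:
        """Members get 10% off; everyone else is supposed to pay full price."""
        if is_member:
            return price * 0.9
        # Bug: a leftover "promo" discount for non-members. Static analysis sees
        # nothing wrong, and no test anticipated this case; a reviewer reading
        # the diff against the requirement can catch it.
        return price * 0.95


    def test_member_discount():
        assert discounted_price(100.0, is_member=True) == pytest.approx(90.0)

Both the test suite and the type checker pass here; that's the gap review fills.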
I review the tests as much as (or more than) the code. If the tests are set up to test the right business rules, then I can be reasonably sure that the code will behave as expected.
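For example, here's a sketch (hypothetical names and numbers) of the kind of test I'd rather review: the business rule is stated directly in the test name and assertions, so the review question becomes "is this the right rule?" rather than "is the code right?":

    # Hypothetical example: the rule is visible in the test itself.

    def shipping_cost(order_total: float) -> float:
        return 0.0 if order_total >= 50.0 else 4.99


    def test_orders_of_50_or_more_ship_free():
        # Rule under review: free shipping starts at exactly $50, not above it.
        assert shipping_cost(50.0) == 0.0
        assert shipping_cost(49.99) == 4.99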
I just did a code review today where I didn’t find any bugs, but I made the “developer” rewrite the whole thing. He hadn’t written a single line of code with even a minimum of maintainability in mind…
Tests find bugs; code reviews look for quality. Sometimes I can spot a bug, but it’s not my primary goal.
> Automated unit and integration tests are far better at finding logical bugs in code than human reviewers.
My friend, who do you think writes the tests, and how do you think they get into the system?
Often we're writing new functionality. There are no existing tests for it, so the new tests are part of the pull request and must be reviewed to make sure they're actually testing the right thing!
I worked at a place where we wrote the test before the code that implemented the functionality. We wanted to see the test fail, so that we knew it actually tested what we thought it did. Then we wrote the code that would make it pass, and verified that the test now passed.
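A minimal sketch of that workflow with pytest (the function and values are made up):

    import pytest

    # Step 1: write the test first and run it; it fails because order_total
    # doesn't exist yet. Step 2: write just enough code to make it pass, rerun.

    def test_order_total_includes_tax():
        assert order_total(subtotal=100.0, tax_rate=0.07) == pytest.approx(107.0)


    # Written only after watching the test above fail:
    def order_total(subtotal: float, tax_rate: float) -> float:
        return subtotal * (1 + tax_rate)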
I'd like to see the difference in bug detection between "pull requests" and "over-the-shoulder code reviews".
Edit: or, rather, in my experience the author of the code often finds the bugs when explaining it to the person standing next to them. This goes away in the GitHub-style pull request.
I've always thought the purpose of code review was to determine whether the implementation is consistent from an architectural perspective; bug finding seems like something that should be caught by unit tests and other tools (unless you happen to catch one by chance).
I don't think you need to find that many bugs to make code reviews useful. Even if you only find a handful of bugs, it is probably worth it.
Keep in mind that the longer the code exists, the more expensive it is to fix. Finding a bug early can save a lot of time and money down the line.
Also, as someone else pointed out, they can help with code quality, so they can help prevent someone from adding a bug later.
1. The submitted title does not match the article, and the article does not support the claim of the title. (Its thesis is closer to "code reviews are tedious and a bottleneck".)
It seems I incorrectly presumed this was well known: I will review your code, but it's your responsibility to find the bugs. Sometimes I spot them, though!
So, "don't usually" equals 15%. :^)
Deep dive: https://www.michaelagreiler.com/wp-content/uploads/2019/02/C...