Shouldn't the types of mistakes you're worried about be caught by the automated tests? IMO code review is more about bigger-picture design, e.g. whether a piece of code yields the correct result but with sub-optimal performance (which can sometimes be caught by automated tests, but not always).
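A contrived sketch of what I mean (function names are made up for illustration): both versions below pass the same correctness tests, but only a reviewer, or a dedicated benchmark, is likely to flag that the first one is quadratic.

```python
# Hypothetical example: both functions pass the same correctness tests,
# but the first does repeated linear scans of a list.
def find_duplicates_slow(items):
    seen = []
    dupes = []
    for item in items:
        if item in seen:          # linear scan of a list -> O(n^2) overall
            dupes.append(item)
        else:
            seen.append(item)
    return dupes

def find_duplicates_fast(items):
    seen = set()
    dupes = []
    for item in items:
        if item in seen:          # constant-time set lookup -> O(n) overall
            dupes.append(item)
        else:
            seen.add(item)
    return dupes

# A typical unit test can't tell them apart:
assert find_duplicates_slow([1, 2, 2, 3, 3]) == [2, 3]
assert find_duplicates_fast([1, 2, 2, 3, 3]) == [2, 3]
```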
Prior work in CompSci on various engineering techniques showed code review to be among the most effective at any point in the lifecycle. Testing can miss lots of things. So, best to do both.
There's a lot that falls between automated tests and code review. We have an extensive system of static analysis and code-quality bots that run, but there's still a lot of design patterning and higher-level functionality that machines (or, at least, our machines) don't always catch.
Obviously, it depends heavily on the codebase and the number/quality of engineers working on it, but in my experience a team reviewing each other's code is still something that can't be 100% replaced with automated tests.