That’s a good point. However, I suspect that a lot of people are not working at a scale that would qualify as “large”. I’m currently on a project where the infrastructure folks planned for massive scaling, but the highest we’ve ever reached is 3 pods, and we could easily have handled that with one server. I think I would wait for logging to become a problem before reducing it. If that ever happens.
How does making these DEBUG logs into INFO logs make the volume manageable?
Or, to flip that around, if you take a program that produces a manageable amount of INFO logs, and change some of those INFOs to DEBUGs, how does that suddenly become unmanageable?
My point is that DEBUG level logging is (hopefully!) not on by default, and that this is what makes the production log volume manageable.
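To make that concrete, here’s a minimal Go sketch using the standard log/slog package (the messages and keys are made up): with the handler pinned at INFO, the Debug call is dropped before it ever reaches the sink, so moving a statement between levels changes production volume without touching the call site.

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	// The handler's level gates what actually gets emitted.
	logger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
		Level: slog.LevelInfo, // production default: INFO and above
	}))

	logger.Debug("cache lookup", "key", "user:42") // suppressed, never written
	logger.Info("request handled", "status", 200)  // emitted
}
```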
My experience has been that 1 customer-facing byte tends to generate something like ~10 DEBUG-level telemetry bytes. That level of amplification can't feasibly be sustained at nontrivial request volumes; your logging infrastructure would dwarf your production infrastructure.
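To put hypothetical numbers on it: a service that returns 100 GB/day to customers would, at that ~10x ratio, emit around 1 TB/day of debug telemetry, all of which has to be shipped, indexed, and retained, usually at a per-byte cost comparable to or higher than serving the original traffic.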
You continue to assert that error handling adds noise to a block of code. This is an opinion, and that's fine, but it's not an objective truth. In many, many domains -- in fact the domains which Go targets -- error handling is just as important as the business logic.
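For a concrete (made-up) illustration of what I mean: in the sketch below, the `if err != nil` branch isn't filler, it decides what context the caller sees and whether the program can proceed at all.

```go
package main

import (
	"fmt"
	"os"
)

// loadConfig reads a config file. The error path carries real
// semantics: it attaches call-site context while preserving the
// underlying cause for callers that want to inspect it.
func loadConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("loading config %q: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := loadConfig("/etc/myapp/config.json"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```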
They lost close to half of their engineering team, significant percentages of their operations teams, and many senior leaders.