Hacker News | sagichmal's comments

> A handful of people left

They lost close to half of their engineering team, significant percentages of their operations teams, and many senior leaders.


What? Only 5% of the company took the buyout: https://www.coindesk.com/business/2020/10/08/5-of-coinbase-e...


Maybe you're mixing this up with Basecamp?


Yes. All of the "private" correspondence he's received, which of course he cannot share, has been overwhelmingly positive. Indeed.


Only at trivial request volume. It’s easy to overwhelm any logging system without discipline.


Nah, Splunk can handle anything you throw at it.


I am currently at an organisation where Splunk is a bottleneck.


No, you are currently at an organization that hasn't allocated enough resources to Splunk.


Hah! OK, sure.


Logging after the request completes (rather than before and after) is SOP in any reasonably large architecture. Too much logging is a problem you hit very quickly with nontrivial traffic.


That’s a good point. However, I suspect that a lot of people are not working at a scale that would qualify as “large”. I’m currently on a project where the infrastructure team planned for massive scaling, but the highest we ever got was 3 pods, and we could easily have handled that with one server. I think I would wait for logging to become a problem before reducing it. If that ever happens.


Logging is significantly cheaper than tracing to maintain in a usable state.


You only emit a single log event (line) per request. Problem solved.
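A minimal sketch of that "one event per request" pattern in Go, using the standard library's log/slog; the path, status, and field names here are illustrative, not a prescription:

```go
package main

import (
	"context"
	"log/slog"
	"os"
	"time"
)

// requestEvent gathers everything learned while serving a request
// into one set of fields, to be emitted as a single log event.
func requestEvent(path string, status int, elapsed time.Duration) []slog.Attr {
	return []slog.Attr{
		slog.String("path", path),
		slog.Int("status", status),
		slog.Duration("elapsed", elapsed),
	}
}

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	start := time.Now()
	// ... business logic runs here, collecting context as it goes ...
	attrs := requestEvent("/api/items", 200, time.Since(start))

	// Exactly one log line for the whole request.
	logger.LogAttrs(context.Background(), slog.LevelInfo, "request", attrs...)
}
```

The point of accumulating attributes rather than logging inline is that the request's whole story lands in one searchable record.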


Yes, it’s useful, but that occasional usefulness is substantially outweighed by its perpetual cost.


Nope, this produces unmanageable log volume.


How does making these DEBUG logs into INFO logs make the volume manageable?

Or, to flip that around, if you take a program that produces a manageable amount of INFO logs, and change some of those INFOs to DEBUGs, how does that suddenly become unmanageable?


My point is that DEBUG level logging is (hopefully!) not on by default, and that this is what makes the production log volume manageable.

My experience has been that 1 customer-facing byte tends to generate something like ~10 DEBUG-level telemetry bytes. That level of amplification can't feasibly be sustained at nontrivial request volumes: your logging infrastructure would dwarf your production infrastructure.


You continue to assert that error handling adds noise to a block of code. That's an opinion, which is fine, but it's not an objective truth. In many, many domains (in fact, precisely the domains Go targets) error handling is as important as the business logic.
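A small Go sketch of the point: the error paths below are requirements of the domain, not noise around it (the function and its rules are made up for illustration):

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parseQuantity is a hypothetical domain function: rejecting bad
// input is as much a part of the spec as accepting good input.
func parseQuantity(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parse quantity %q: %w", s, err)
	}
	if n <= 0 {
		return 0, errors.New("quantity must be positive")
	}
	return n, nil
}

func main() {
	for _, in := range []string{"3", "-1", "abc"} {
		if n, err := parseQuantity(in); err != nil {
			fmt.Println("rejected:", err)
		} else {
			fmt.Println("accepted:", n)
		}
	}
}
```

Deleting the two error branches wouldn't make this function cleaner; it would make it wrong.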


That is not at all what those links indicate.

