I'm surprised no benchmarks were done with logging turned on.

I get wanting to isolate things, but this is the problem with microbenchmarks: they don't test "real world" usage patterns. Chances are your real production server will be logging to at least syslog, so logging performance is worth looking into.

If one of them can write logs with 500 microseconds added to each request but the other adds 5 milliseconds, that could be a huge difference in the end.
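
As a rough back-of-the-envelope (made-up numbers, and assuming the log write sits on the request's critical path for a single worker):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        baseline := 1 * time.Millisecond // hypothetical request time without logging

        for _, logCost := range []time.Duration{500 * time.Microsecond, 5 * time.Millisecond} {
            perRequest := baseline + logCost
            fmt.Printf("log cost %v -> %.0f req/s per worker\n", logCost, float64(time.Second)/float64(perRequest))
        }
    }

That's roughly 667 vs 167 requests per second per worker, a 4x gap before concurrency and buffering enter the picture, which is why I'd want at least one run with logging on.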




Caddy's logger (uber/zap) is zero-allocation. We've found that the writing of the logs is often much slower, e.g. printing to the terminal or writing to a file, and that's a system problem more than a Caddy one. But the actual emission of logs is quite fast, last time I checked!
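
Not Caddy's actual wiring, just a minimal sketch of that split with zap: the encoder/core handles the emission, and the WriteSyncer you hand it is the sink that usually dominates:

    package main

    import (
        "io"
        "os"

        "go.uber.org/zap"
        "go.uber.org/zap/zapcore"
    )

    // newLogger builds a zap logger against a given sink so the emission
    // cost can be compared to the cost of actually writing the entry out.
    func newLogger(sink zapcore.WriteSyncer) *zap.Logger {
        enc := zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig())
        return zap.New(zapcore.NewCore(enc, sink, zapcore.InfoLevel))
    }

    func main() {
        fast := newLogger(zapcore.AddSync(io.Discard)) // encode only; nothing hits the OS
        slow := newLogger(zapcore.AddSync(os.Stdout))  // every entry goes through a terminal write

        for _, l := range []*zap.Logger{fast, slow} {
            l.Info("handled request", zap.String("uri", "/"), zap.Int("status", 200))
            _ = l.Sync()
        }
    }

Benchmarking those two against each other mostly measures the sink, which is the "system problem" part.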


I think your statement is exactly why logging should have been turned on, at least for one of the benchmarks. If it's a system problem, then it's a problem both tools need to deal with.

If one of them can do 100,000 requests per second and the other can do 80,000, but both are capped at 30,000 requests per second because of system-level limitations, then you could make a strong case that both products perform equally in the end.


This is - along with some reverse proxy settings tweaks - one of the variables I'd be keen to test in the future, since it's probably _the_ most common delta between my tests and real-world applications.



