As you wrote, they are new, and weren't available when the decision was made for the Go GC.
> The reason Go was able to achieve low latency with a relatively simple design is because 1. it suffers a hit to throughput and 2. that throughput hit, while significant, is not catastrophic because Go relies heavily on value types.
And are you saying these "new low-latency Java GCs" have no tradeoffs?
I'm sorry, but we are discussing the keynote of the International Symposium on Memory Management, which is a recognized event in the field, and you are claiming things without any substantial material to offer. Maybe you're right, but I need more than vague assertions to be convinced :-)
There is always a tradeoff in throughput (and a commercial low-latency GC has been available for Java for years, as well as realtime GCs). All I'm saying is that the reason Go is able to achieve low latency with a relatively simple design is that the language is designed to generate less garbage, so the challenge is smaller. My point is that Go's GC is not some extraordinary breakthrough in GC design (not that Hudson hasn't made breakthroughs in the past) that unlocks the secret to low-pause GCs, but more an indirect exploitation of the fact that the allocation rate is relatively low. The same design with much higher allocation rates would likely suffer an unacceptable hit to throughput.
I'm not really sure what you're trying to prove here. Java is definitely a great platform and has a cutting-edge GC. Nobody contests that. Go's GC is another point in the design space, starting with different constraints (low latency and non moving). This is what makes the ISMM keynote interesting.
Hudson explains they tried to switch to a generational GC, but for this they needed a write barrier. It was difficult to optimize the write barrier by eliding it whenever possible, because Go is moving to a system where there is a GC safepoint at every instruction (this is because goroutines can be preempted at any time, which is not a requirement for Java threads). In other words, the GC design is also constrained by the way goroutines work.
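To make the write-barrier point concrete, here's a toy sketch of my own (not Go's actual runtime code, and much simplified): a Dijkstra-style insertion barrier intercepts every pointer store during a concurrent mark phase and "shades" the new referent so the marker can't miss it. The calls you'd want to elide for performance are exactly these, and that elision is what's hard when every instruction must be a safepoint.

```go
package main

import "fmt"

// object is a toy heap object; `marked` stands in for the GC's mark bit.
type object struct {
	marked bool
	child  *object
}

// gcMarking pretends a concurrent mark phase is in progress.
var gcMarking = true

// shade marks the referent grey. In a real collector this would push
// the object onto a grey work queue rather than just set a flag.
func shade(o *object) {
	if o != nil && !o.marked {
		o.marked = true
	}
}

// writeChild is the barriered pointer store: every mutation of a
// heap pointer goes through this check while marking is running.
func writeChild(parent, ptr *object) {
	if gcMarking {
		shade(ptr)
	}
	parent.child = ptr
}

func main() {
	a, b := &object{}, &object{}
	writeChild(a, b)
	fmt.Println(b.marked) // the newly stored referent was shaded
}
```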
Hudson also explains that because Go relies heavily on value types, escape analysis is more effective even without interprocedural analysis, which makes generational collection less beneficial than in other languages using a GC.
Keeping the allocation rate low is part of Go's GC design. A language with a "much higher allocation rate" would probably lead to a different design.
Thanks for the link to the presentation on ZGC! I'll watch it soon. But I saw the slide showing the performance goals, and ZGC doesn't sound a lot better than the numbers Hudson presented for Go:
- "10 ms max pause time" for ZGC versus "two <500 microseconds STW pauses per GC" for Go
- "15% max throughput reduction" for ZGC versus "25% of the CPU during GC cycle" for Go
By the way, I also note that ZGC is "single generation".