Another way to recap the paper: run small clusters at your PoPs, aggregate the results of those clusters, and replicate them back to an upstream cluster with eventual consistency. The PoP clusters throttle down their replication under high load. The PoP clusters are also scaled up and down at specific times, which saves money while still absorbing traffic spikes.

When you have a flood of writes, sometimes those writes are identical or nearly identical, or the data in each write barely changes. It makes no sense to flood those writes upstream: since the data is barely changing, you probably don't need it that urgently. Throttling lets you simply propagate the changes upstream with eventual consistency.
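
To make that concrete, here's a minimal sketch in Python of the kind of coalescing buffer a PoP could use. The `push_upstream` replicator and the flush interval are assumptions for illustration, not anything from the paper: writes to the same key within a flush window are merged locally, and the window can be stretched under load to throttle replication.

    import threading
    import time

    class CoalescingBuffer:
        """Merge writes per key at the PoP; ship batches upstream periodically."""

        def __init__(self, push_upstream, flush_interval=1.0):
            self.push_upstream = push_upstream    # hypothetical upstream replicator
            self.flush_interval = flush_interval  # stretch this under load to throttle
            self.pending = {}
            self.lock = threading.Lock()

        def write(self, key, value):
            # A later write to the same key overwrites the earlier one locally,
            # so upstream only ever sees the newest value per key.
            with self.lock:
                self.pending[key] = value

        def flush_loop(self):
            while True:
                time.sleep(self.flush_interval)
                with self.lock:
                    batch, self.pending = self.pending, {}
                if batch:
                    self.push_upstream(batch)  # eventually consistent replication

Stretching `flush_interval` under load is the throttling knob: the upstream sees fewer, bigger, already-merged batches instead of the raw write flood.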




Not just aggregating writes, but merging them.

It seems that high-load, high-scale solutions [1][2] are converging on this idea of merging updates with eventual consistency; a sketch of the merge idea follows the references below.

[1] https://arxiv.org/pdf/1708.06423.pdf

[2] https://databeta.wordpress.com/2018/03/09/anna-kvs/
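
For what "merging" means mechanically, here's a minimal sketch of a lattice-style merge in the spirit of Anna's design. The grow-only counter below is a standard CRDT, not Anna's actual code: because element-wise max is commutative, associative, and idempotent, replicas can apply merges in any order, any number of times, and still converge.

    class GCounter:
        """Grow-only counter: per-replica counts, merged by element-wise max."""

        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.counts = {}  # replica_id -> count

        def increment(self, n=1):
            self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

        def merge(self, other):
            # max is commutative, associative, and idempotent, so merges can
            # arrive in any order, any number of times, and replicas converge.
            for rid, count in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), count)

        def value(self):
            return sum(self.counts.values())

    # Two replicas update concurrently, then exchange state and converge.
    a, b = GCounter("a"), GCounter("b")
    a.increment(3)
    b.increment(2)
    a.merge(b)
    b.merge(a)
    assert a.value() == b.value() == 5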



