> Page-level Compression ... but I am fine with relying on ZFS filesystem compression

There are compromises there. With page-level compression the pages usually remain compressed in memory as well as on disk, and get unpacked on access (and repacked on change). This eats CPU time but saves RAM as well as disk space, which for some workloads is a beneficial trade-off: a larger common working set fits into the same amount of active memory. That matters particularly when IO is slow, such as on traditional spinning disks, on cloud providers when you aren't paying a fortune, or when competing with a lot of other IO or network contention on a busy LAN/SAN.
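
As a concrete example of that first style (assuming InnoDB here, since MySQL comes up below; the table name is made up):

  # ROW_FORMAT=COMPRESSED stores zlib-compressed pages on disk and keeps
  # compressed copies in the buffer pool, unpacking them on access.
  # Requires innodb_file_per_table=ON; mydb.events is a placeholder.
  mysql -e "ALTER TABLE mydb.events ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;"

  # Rough view of how often pages compress/decompress and at what cost:
  mysql -e "SELECT * FROM information_schema.INNODB_CMP;"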

With filesystem compression you don't get the same CPU hit every time the page is read, and depending on your data you may get better compression for a couple of reasons (for instance, a filesystem record can span several database pages, giving the compressor more context to work with), but you don't get the RAM saving.
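
For reference, the filesystem side of that is just a couple of dataset properties (the dataset name is a placeholder):

  # Transparent compression on the dataset holding the data directory;
  # zstd is also an option on OpenZFS 2.0+.
  zfs set compression=lz4 tank/db

  # Only newly written blocks are compressed; check the achieved ratio later.
  zfs get compressratio tank/db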




ZFS ARC actually works the same way: with compressed ARC, the records remain compressed in memory and are decompressed on access.
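
On Linux/OpenZFS you can see that directly in the ARC stats, e.g.:

  # compressed_size vs. uncompressed_size shows how much RAM the ARC
  # saves by keeping records compressed in memory.
  grep -E '^(compressed_size|uncompressed_size)' /proc/spl/kstat/zfs/arcstats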

When running PostgreSQL, I would probably go with `primarycache=all` for the data dir, a largish ARC, and smallish `shared_buffers`, to take advantage of the filesystem compression.
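
Roughly, something like this (pool/dataset names and sizes are only illustrative):

  # Let the ARC cache data as well as metadata for the Postgres dataset.
  zfs set primarycache=all tank/pgdata
  zfs set compression=lz4 tank/pgdata
  # 8k matches the Postgres page size; some go larger for better compression.
  zfs set recordsize=8k tank/pgdata

  # Cap the ARC at a healthy chunk of RAM, e.g. 32 GiB (Linux/OpenZFS path).
  echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max

  # postgresql.conf: keep shared_buffers smallish and lean on the ARC instead.
  #   shared_buffers = 2GB
  #   effective_cache_size = 32GB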

When running MySQL, I would probably go with `primarycache=metadata` for the data dir, a smallish ARC, and a largish InnoDB buffer pool, to still benefit (slightly) from the filesystem compression.
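
And the MySQL counterpart under the same caveats (names and sizes illustrative):

  # Cache only metadata in the ARC; the InnoDB buffer pool holds the data,
  # so caching it again in the ARC would mostly be wasted RAM.
  zfs set primarycache=metadata tank/mysql
  zfs set compression=lz4 tank/mysql
  # Matches the default 16 KB InnoDB page size.
  zfs set recordsize=16k tank/mysql

  # my.cnf: give InnoDB most of the memory instead of the ARC.
  #   [mysqld]
  #   innodb_buffer_pool_size = 32G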



