If I remember correctly, for ZFS it was mostly stuff about cache size and RAM usage, since we were in a VM and couldn't be too greedy, plus an NVMe drive needing slightly different settings. I didn't spend much time on this. For Postgres it was so, so, so many things; I played with it on and off for months, benchmarking certain key queries and such. And I read a LOT of blog posts about how to make Postgres and ZFS work together ideally. One thing I remember in particular was waffling back and forth between logbias=throughput and logbias=latency. Just google "postgres zfs logbias" and you'll get MANY conflicting opinions. (edit: when I google it now in incognito, my reddit post is at the top lol)
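For reference, flipping that setting is a one-liner; the pool/dataset name here (tank/pgdata) is just a placeholder, not the actual layout from that system:

```shell
# logbias=latency (the default) pushes synchronous writes through the ZIL/SLOG first;
# logbias=throughput skips the separate log and writes straight to the main pool.
zfs set logbias=throughput tank/pgdata

# Verify what's currently in effect
zfs get logbias tank/pgdata
```

Which one wins depends heavily on whether you have a fast SLOG device and how synchronous your Postgres commits are, which is exactly why the opinions online conflict.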
But as for what I actually changed, off the top of my head: I set the recordsize in ZFS large enough that Postgres could safely turn full_page_writes off (safe because of ZFS copy-on-write), and combined with synchronous_commit off, that really sped up the overall system and made the WAL much smaller. Looking at postgresql.conf just now, various other things were tweaked, such as seq_page_cost, random_page_cost, effective_cache_size, effective_io_concurrency, max_worker_processes, default_statistics_target, dynamic_shared_memory_type, work_mem, maintenance_work_mem, and shared_buffers, but those were not quite as important. (Plus some uninteresting tweaks to WAL behavior, since we had replicas that got WAL logs shipped every few minutes with rsync.)
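A rough sketch of what that combination looks like in postgresql.conf; the values here are illustrative guesses, not the original config, and the safety of the first line depends entirely on the filesystem being copy-on-write:

```ini
# postgresql.conf (illustrative values, not the original tuning)
full_page_writes = off        # safe ONLY on CoW filesystems like ZFS, where page writes can't tear
synchronous_commit = off      # big speedup; accepts losing the last few commits on a crash
random_page_cost = 1.1        # ZFS ARC + fast storage make random reads nearly as cheap as sequential
effective_io_concurrency = 200
shared_buffers = 4GB          # kept modest since the ZFS ARC also caches data
effective_cache_size = 12GB   # tell the planner about ARC + OS cache
work_mem = 64MB
maintenance_work_mem = 1GB
```

The key dependency is the pairing: full_page_writes = off is what shrinks the WAL dramatically, but it's only sound because ZFS guarantees each record is written atomically (any recordsize of 8K or above covers a full Postgres page).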