Hacker News

been using zfs on my home nas without ecc for well over a decade and never had any problems. i've seen people claiming this since before i started using zfs and it seems so unnecessary for some random home project.



Unless you've verified hashes of your files over time you may be having problems and not realizing it.


i've heard people say this, like i said, since before i started using zfs and i've never had an issue with a corrupted file. there are a few things that could be happening: i'm the luckiest person who has ever lived, these bit flip events don't happen nearly as often as people like to pretend they do, or when they do happen they aren't likely to be a big deal.


If all you have on your NAS is pirated movies, then yes.

> when they do happen they aren't likely to be a big deal.

But with more sensitive data it might matter to you. RAM can go bad like HDDs can, and without ECC you have no chance of telling. ZFS won't help you here if the bit flip happens in the page cache. The file will corrupt in RAM and ZFS will happily calculate a checksum for that corrupted data and store it alongside the file.


I have some JPEGs with bit flips. I could tell because they display ugly artifacts at the point of the bit flip. (You can see the kind of artifacts I'm talking about here: https://xn--andreasvlker-cjb.de/2024/02/28/image-formats-bit...)

I happened to have archived the files to CD-Rs. I was able to compare those archived copies with the ones that remained on my file server. There were bit flips scattered randomly through some of the files.

After that happened I started hashing all of my files and comparing hashes when I migrate files during server upgrades. Prior to using ZFS I also periodically verified file hashes with a cheapo Perl script.
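A minimal version of that hash-and-verify workflow can be sketched in shell (the paths here are illustrative, not the commenter's actual setup or Perl script):

```shell
# Sketch of a hash-and-verify workflow (paths are illustrative).
DATA="${DATA:-.}"                 # directory to protect
SUMS="/tmp/file-hashes.sha256"    # keep the hash list OUTSIDE $DATA

# 1. Record a SHA-256 hash for every file.
find "$DATA" -type f -print0 | xargs -0 sha256sum > "$SUMS"

# 2. Later (after a migration, or from cron), re-check every file.
#    --quiet prints only the files whose hashes no longer match.
sha256sum -c --quiet "$SUMS"
```

Keeping the hash list outside the hashed directory matters: otherwise the list would hash itself mid-write and the verify step would always report a mismatch.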


They did mention ZFS, so verified hashes of each file block. I hope they are scrubbing, and have at least one snapshot.


ZFS does nothing to protect you against RAM corrupting your data before ZFS sees it. All you'll end up with is a valid checksum of the now bad data.

You can Google more, but I'll just leave this, from the first page of the OpenZFS manual:

  Misinformation has been circulated that ZFS data integrity features are somehow worse than those of other filesystems when ECC RAM is not used. This is not the case: all software needs ECC RAM for reliable operation and ZFS is no different from any other filesystem in that regard.[1]
[1] https://openzfs.readthedocs.io/en/latest/introduction.html


Why would one snapshot help?


One snapshot would help because, if EVERYTHING collapses and you need data recovery, the snapshot provides a base point for the recovery. This should allow better recovery of metadata. Not that this should EVER happen -- it is just a good idea. I use Jim Salter's syncoid/sanoid to make snapshots, age them out, and send data to another pool.

I agree that ECC is a damn good idea - I use it on my home server. But, my lappy (i5 thinkpad) doesn't have it.


If a single byte flips in a 4-10GB video file, nobody will ever notice it.

There aren't that many cases where it actually matters.


I believe ZFS does periodic checksumming (scrubbing).


Strictly speaking I don't think ZFS itself does, but it is very common for distros to ship a cronjob that runs `zpool scrub` on a schedule (often but not always default enabled).
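For reference, such a cronjob can be as small as one line. The pool name `tank`, the file path, and the schedule below are examples, not any particular distro's shipped file:

```shell
# /etc/cron.d/zfs-scrub (illustrative): scrub the pool "tank" at 02:00
# on the first day of every month. Adjust the pool name and schedule.
0 2 1 * * root /usr/sbin/zpool scrub tank
```

`zpool status tank` shows whether a scrub is running and the result of the last one.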



