
You've been voted down, but I'm curious - could you elaborate?



ZFS has built-in support for redundant disks (in fact, putting a lower-level hardware or software RAID under ZFS is not recommended). This allows it to do fancy things like allocate files intelligently across stripes, rebuild a disk using only the data actually in use by the file system (a huge time saver), repair damaged files on the fly, or, at worst, report which files cannot be repaired.
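A rough sketch of what that looks like from the command line (the pool name and device paths are illustrative, and these commands need root and a real ZFS setup):

```shell
# Create a pool with built-in redundancy (RAID-Z1 across three disks);
# no separate hardware or software RAID layer underneath.
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

# A scrub walks only the allocated blocks, verifying checksums and
# repairing damaged copies from redundancy where possible.
zpool scrub tank

# Status (-v) lists any files that could not be repaired.
zpool status -v tank
```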


But it won't let you add disks to your array one by one, unlike Linux software RAID.


What gave you that impression? With `zpool attach` you can attach a device to an existing VDEV (single device, mirror, RAIDZ) to increase redundancy. With `zpool add` you can add a new VDEV to your pool to extend its capacity.
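The distinction between the two commands, sketched with illustrative pool/device names (attach shown here for the mirror case):

```shell
# `zpool attach`: attach a disk to an existing vdev to increase
# redundancy, e.g. turn a single-disk vdev into a two-way mirror.
zpool attach tank /dev/sda /dev/sdb

# `zpool add`: stripe a whole new vdev onto the pool to extend
# its capacity.
zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf
```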

See https://illumos.org/man/1m/zpool


What he means is that you can't add another disk to a vdev. If you have a raidz with 3 disks, you can't add a 4th and rebalance. What you can do is either add another vdev to the pool, making it two or more striped raidz vdevs, or replace your 3 disks with bigger ones.

The reason is that adding a disk to a vdev would require rebalancing everything, and given the complexity of ZFS, you're better off just making a new raidz with 4 disks and moving the data over; otherwise it might take ages and complicate things a lot. This is obviously not ideal for a home user, but ZFS was not created with home users in mind.

Btrfs supposedly lets you do this, but its raid5/6 is still unstable - probably for a reason. In practice this is not such a big deal. You can add disks to a ZFS mirror (and it is a real n-way mirror, not the btrfs raid1 thing spread over multiple disks), and you can add mirror vdevs to a raid10-style setup. If you are making a raidz1/2/3, just make sure you understand that you can't expand it by adding more disks.
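For the mirror cases above, a sketch (pool and device names are illustrative):

```shell
# Grow a two-way mirror into a real three-way mirror by attaching
# another disk to the existing mirror vdev.
zpool attach tank /dev/sda /dev/sdc

# Grow a raid10-style pool by adding a whole new mirror vdev.
zpool add tank mirror /dev/sdd /dev/sde
```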


Hope this comment isn't too ignorant, but how does it compare to gfs2?


It doesn't. GFS2 is a clustered filesystem; ZFS is not. GFS2 is for sharing a single filesystem between multiple nodes, with a distributed lock manager to avoid corruption. It's complex to fix when it breaks, generally fairly slow, and very few people need it - for most users, NFS (especially v4 and up) is a better way of accomplishing the same end goal.



