Hacker News

With a cloud solution, how are you not one bug or sysadmin failure away from data-loss or leaks?



You avoid backup software bugs by using object lock. Even if your backup software messes up, it cannot remove older backups. On the cloud side, sure, bad things can happen, but a data loss event through a sysadmin or hardware failure is far less likely on e.g. Amazon S3 or Backblaze B2 than on a home Raspberry Pi with a consumer-grade SSD taped on top of it. And you are probably backing up to at least two different destinations (at different companies) if you care about your data.
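For the curious, a minimal sketch of what object lock looks like with S3 and boto3 (the bucket name and retention period below are placeholders, and the bucket must have been created with Object Lock enabled):

```python
from datetime import datetime, timedelta, timezone

def object_lock_params(retention_days):
    """Build the extra put_object kwargs that make an uploaded backup
    immutable until the retention date, even for the uploading credentials."""
    return {
        # COMPLIANCE mode: nobody, not even the root account, can shorten it.
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": datetime.now(timezone.utc)
                                     + timedelta(days=retention_days),
    }

# Hypothetical usage (names are placeholders):
# import boto3
# s3 = boto3.client("s3")
# with open("backup.tar.zst", "rb") as f:
#     s3.put_object(Bucket="my-backups", Key="2024-06-01.tar.zst",
#                   Body=f, **object_lock_params(retention_days=90))
```

Until the retain-until date passes, delete requests against that object version fail, regardless of what the backup software does.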


> Even if your backup software messes up, it cannot remove older backups.

But what if that has a bug?

Multiple write-once DVDs are clearly the only option per your logic.

You also can't just keep adding backups indefinitely, or your costs will approach infinity given enough time. There has to be a mechanism for deleting things, be it on DVD or on object lock cloud hype.

Personally, I think a backup is just that: a reserve copy. It should be reliable, but as long as you test your backups regularly, you can be confident it won't suddenly fail you when the primary copy does. I, too, like to have two independent backups instead of one (it once happened to me that, due to a niche mechanical failure called "little brother", an external backup drive failed very soon after the primary), but saying one shouldn't use normal sync software "because it's one bug/misclick away from erasure" is silly. There can always be bugs and misclicks. They even mentioned using a read-only mode.

It's a matter of how certain you want to be, and most people don't have any (automated) backups in the first place. Syncthing or similar software wouldn't be (isn't) my choice of backup software either, but I wouldn't dismiss a simple solution that works fine for them.
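"Test your backups regularly" can be as simple as checksumming both trees and comparing. A sketch (directory paths are whatever your setup uses):

```python
import hashlib
from pathlib import Path

def tree_digests(root):
    """Map each file's path (relative to root) to its SHA-256 digest,
    so a primary copy can be compared against a backup."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def backup_matches(primary, backup):
    """True only if both trees contain the same files with the same contents."""
    return tree_digests(primary) == tree_digests(backup)
```

Running something like this on a schedule catches silent corruption or a sync that quietly stopped, long before you need the backup.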


> You also can't just keep adding backups indefinitely, or your costs will approach infinity given enough time. There has to be a mechanism for deleting things, be it on DVD or on object lock cloud hype.

Object locks have a configurable expiration date.
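And the deletion side can be automated with a lifecycle rule, so costs stay bounded without any software being able to delete early. A sketch of the rule body that boto3's put_bucket_lifecycle_configuration expects (bucket, prefix, and day counts are placeholders):

```python
# Delete backups once they are older than the lock period. If Days here is
# shorter than the Object Lock retention, deletion is simply deferred until
# the lock expires.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-old-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Expiration": {"Days": 120},
        }
    ]
}

# Hypothetical usage:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-backups", LifecycleConfiguration=lifecycle_config)
```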

> But what if that has a bug?

Again, this is yet another typical HN discussion. We are now comparing a consumer-grade SSD taped to a Raspberry Pi without ECC memory to a theoretical bug that might be in S3's or B2's object lock implementation. They have stored petabytes of data and there are virtually no reports of data loss ever, nor has anyone bypassed object lock, even though it's a high-value target.


Depends on your cloud solution, but OneDrive has version history so you can just roll it back.


Unless OneDrive silently corrupts your files upon upload: https://github.com/OneDrive/onedrive-api-docs/issues/1577
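Whatever the storage backend, you can defend against silent upload corruption by recording a digest before upload and re-checking it after a download. A sketch (file names are placeholders):

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large backups don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical round trip: record the digest before upload, later download
# the object and refuse to trust the copy unless the digests match.
# before = sha256_file("backup.tar.zst")
# ... upload, then download to "restored.tar.zst" ...
# assert sha256_file("restored.tar.zst") == before
```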


Btrfs has snapshots too. That doesn't rule out bugs in it.

Making something someone else's problem doesn't magically make the problem go away.


Are we really having a debate on whether cloud storage with proper redundancy, reliable hardware with ECC memory and dedicated sysadmins is as vulnerable to mishaps as a consumer-grade SSD taped to a Raspberry Pi?

I hope we can at least agree that an SSD with btrfs and no RAID attached to a Pi is 3-4 orders of magnitude more likely to lose data within a given time period than, say, S3?




