Hacker News

The vision of http://sandstorm.io ties well into this strategy (not affiliated, I just like the idea).



They run databases and store their data inside containers. What a bad idea.


Why?


Containers are supposed to be throwaway; if you need to change something inside, you rebuild it from scratch and redeploy.

Your data, on the other hand, should not be throwaway. The common architecture is to have an application server inside a container that connects to a database or other persistent storage running outside containers.
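
A minimal sketch of that architecture (the image name and database address are hypothetical), with only the stateless app containerized and the database reached over the network:

```yaml
# docker-compose.yml (hypothetical sketch) -- only the stateless app
# runs in a container; it connects to a database server managed
# outside of any container, e.g. on a dedicated host.
services:
  app:
    image: example.com/myapp:latest   # placeholder image name
    environment:
      # Address of the externally managed database server.
      DATABASE_URL: postgres://myapp@db.internal:5432/myapp
```

The app container stays throwaway: you can rebuild and redeploy it freely, because no state lives inside it.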


Uhm. I'm not using containers much, so I'm not up to date with best practices, but I recall a solution involving a "shared volume" for a containerized database to store data in. Is this approach wrong?


Yes, because when (not if) your container crashes, your data is gone.


Nope, that is the problem that mounts and shared volumes solve.

At that point you can argue there is no point in using a container, but your statement is false.
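
As a sketch of what a named volume looks like in practice (assuming Docker Compose and the official postgres image; the password is a placeholder): the data directory lives in a Docker-managed volume on the host, outside the container's throwaway filesystem layer, so it survives the container crashing or being rebuilt.

```yaml
# docker-compose.yml (hypothetical sketch) -- "pgdata" is a named
# volume managed by Docker on the host, so the database files
# outlive any individual container. Note it is still tied to this
# one machine, which is the scheduling caveat discussed below.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

Destroying and recreating the `db` container leaves the contents of `pgdata` intact.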


Mounts and shared volumes are fine, provided you can guarantee that the container will be scheduled on the specific machine where the given filesystem lives.

If you can't guarantee that, you are going into the world of NFS (which databases do not like much) or iSCSI, or, if you have distributed storage, into the world of GlusterFS, Ceph, or something similar.

It's much simpler to just set up a database server (or cluster) and live with that.



