Containers are supposed to be throwaway; if you need to change something inside, you rebuild it from scratch and redeploy.
Your data, hopefully, is not throwaway. The common architecture is to have an application server inside a container that connects to a database or other persistent storage running outside the containers.
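A minimal sketch of that split, assuming Docker, a Postgres server reachable at db.internal (a hypothetical hostname), and an app image that takes its connection string from an environment variable:

    # The app runs in a disposable container; the database lives outside it.
    # "my-app", "db.internal" and the credentials are placeholders for your own setup.
    docker run --rm \
      -e DATABASE_URL="postgres://app:secret@db.internal:5432/appdb" \
      my-app:latest

Rebuild and redeploy the app container as often as you like; the data never lived inside it.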
Uhm. I don't use containers much, so I'm not up to date with best practices, but I recall a solution involving a "shared volume" for a containerized database to store its data in. Is this approach wrong?
Mounts and shared volumes are fine, as long as you can guarantee that the container will be scheduled on the specific machine where the given filesystem lives.
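For that single-node case, a named volume is usually all you need. A sketch assuming Docker and the official postgres image (whose data directory is /var/lib/postgresql/data):

    # "pgdata" lives on this host; the container itself stays disposable.
    docker volume create pgdata
    docker run -d \
      -v pgdata:/var/lib/postgresql/data \
      postgres:16

Kill the container, start a new one with the same -v flag, and the data is still there, provided the new container lands on the same machine.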
If you can't guarantee that, you're going into the world of NFS (which databases do not like much) or iSCSI, or, if you have distributed storage, into the world of GlusterFS, Ceph, or something similar.
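If you do go down the NFS route with Docker, the local volume driver can mount it for you. A sketch, assuming an NFS export at nfs.example.internal:/exports/pgdata (both hypothetical):

    # Creates a volume backed by an NFS mount; the options are passed to mount.
    docker volume create \
      --driver local \
      --opt type=nfs \
      --opt o=addr=nfs.example.internal,rw \
      --opt device=:/exports/pgdata \
      pgdata-nfs

You'd then point -v pgdata-nfs:/var/lib/postgresql/data at it, with all the caveats above.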
It's much simpler to just set up a database server (or cluster) and live with that.