Hello Self-Hosters,
What is the best practice for backing up data from docker as a self-hoster looking for ease of maintenance and foolproof backups? (pick only one :D )
Assume directories with user data are mapped to a NAS share via NFS and backups are handled separately.
My bigger concern here is how do you handle all the other stuff that is stored locally on the server, like caches, databases, etc. The backup target will eventually be the NAS and then from there it’ll be double-backed up to externals.
-
Is it better to run
cp -a /var/lib/docker/volumes/* /backupLocation
every once in a while, or is it preferable to define mountpoints for everything inside of /home/user/Containers
and then use a script to sync it to wherever you keep backups? What pros and cons have you seen or experienced with these approaches?
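For context, the two approaches I’m weighing look roughly like this (the volume name and target paths are made up):

```
# Option A: copy a named volume out through a throwaway container
docker run --rm \
  -v myapp_data:/data:ro \
  -v /mnt/nas/backups:/backup \
  alpine tar czf /backup/myapp_data.tar.gz -C /data .

# Option B: bind mounts under /home/user/Containers, synced as a plain directory tree
rsync -a --delete /home/user/Containers/ /mnt/nas/backups/containers/
```
-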
How do you test your backups? I’m thinking about digging up an old PC to use for testing backups. I assume I can just edit the IP addresses in the docker-compose files, mount my NFS dirs, and fail over to see if it runs.
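My rough plan for that restore drill, with placeholder host names and paths, would be something like:

```
# On the spare test PC: pull the latest backup and start the stack from it
rsync -a nas.local:/export/backups/containers/ /home/user/Containers/

# Mount the NFS shares read-only so a misconfigured container can't write to them
sudo mount -t nfs -o ro nas.local:/export/media /mnt/media

# Bring everything up and check that the services start cleanly from the restored data
docker-compose up -d
docker-compose ps
docker-compose logs --tail=50
```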
-
I started documenting my system in my notes and making a checklist of what I need to back up and where it’s stored. Currently trying to figure out if I want to move some directories for consistency. Can I just do
docker-compose down
edit the mountpoints in docker-compose.yml
and run docker-compose up
to get a working system?
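In other words, something along these lines (the old and new paths are just examples):

```
docker-compose down                                   # stop the stack so nothing writes mid-move
mkdir -p /home/user/Containers/myapp
cp -a /old/data/path/. /home/user/Containers/myapp/   # copy data, preserving ownership and permissions
# then point the service at the new path in docker-compose.yml, e.g.
#   volumes:
#     - /home/user/Containers/myapp:/var/lib/myapp
docker-compose up -d                                  # recreate the containers with the new mounts
```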
I’d say that the most important takeaway here is to stop all the containers before the backup. Some applications (like databases) are extremely sensitive to data corruption. If you simply `cp` while they are running, you may copy files of the same program at different points in time and end up with a corrupted backup. It is also worth mentioning that a backup is only good if you have verified that you can restore it. There are so many issues you can discover the first time you restore a backup; you want to be sure you discover them while you still have the original data.
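As a rough illustration (the paths are placeholders), the whole routine can be as simple as:

```
#!/bin/sh
set -e
cd /home/user/Containers
docker-compose down                    # stop everything so databases flush and files stop changing
rsync -a --delete /home/user/Containers/ /mnt/nas/backups/containers/
docker-compose up -d                   # bring the services back once the copy is done
```

And every now and then, restore that copy somewhere else and actually start the stack from it, exactly like the spare-PC test described above.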