@pirate
Last active May 26, 2024 16:34
Backup a docker-compose project, including all images, named and unnamed volumes, container filesystems, config, logs, and databases.
#!/usr/bin/env bash
### Bash Environment Setup
# http://redsymbol.net/articles/unofficial-bash-strict-mode/
# https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html
# set -o xtrace
set -o errexit
set -o errtrace
set -o nounset
set -o pipefail
IFS=$'\n'
# Fully backup a docker-compose project, including all images, named and unnamed volumes, container filesystems, config, logs, and databases.
project_dir="${1:-$PWD}"
if [ -f "$project_dir/docker-compose.yml" ]; then
    echo "[i] Found docker-compose config at $project_dir/docker-compose.yml"
else
    echo "[X] Could not find a docker-compose.yml file in $project_dir"
    exit 1
fi
project_name=$(basename "$project_dir")
backup_time=$(date +"%Y-%m-%d_%H-%M")
backup_dir="$project_dir/data/backups/$backup_time"
# Source any needed environment variables
[ -f "$project_dir/docker-compose.env" ] && source "$project_dir/docker-compose.env"
[ -f "$project_dir/.env" ] && source "$project_dir/.env"
echo "[+] Backing up $project_name project to $backup_dir"
mkdir -p "$backup_dir"
echo " - Saving docker-compose.yml config"
cp "$project_dir/docker-compose.yml" "$backup_dir/docker-compose.yml"
# Optional: pause the containers before backing up to ensure consistency
# docker compose pause
# Optional: run a command inside the container to dump your application's state/database to a stable file
echo " - Saving application state to ./dumps"
mkdir -p "$backup_dir/dumps"
# your database/stateful service export commands to run inside docker go here, e.g.
# docker compose exec postgres env PGPASSWORD="$POSTGRES_PASSWORD" pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" | gzip -9 > "$backup_dir/dumps/$POSTGRES_DB.sql.gz"
# docker compose exec redis redis-cli SAVE
# docker compose exec redis cat /data/dump.rdb | gzip -9 > "$backup_dir/dumps/redis.rdb.gz"
for service_name in $(docker compose config --services); do
    image_id=$(docker compose images -q "$service_name")
    image_name=$(docker image inspect --format '{{json .RepoTags}}' "$image_id" | jq -r '.[0]')
    container_id=$(docker compose ps -q "$service_name")

    service_dir="$backup_dir/$service_name"
    echo "[*] Backing up ${project_name}__${service_name} to ./$service_name..."
    mkdir -p "$service_dir"

    # save image
    echo "    - Saving $image_name image to ./$service_name/image.tar"
    docker save --output "$service_dir/image.tar" "$image_id"

    if [[ -z "$container_id" ]]; then
        echo "    - Warning: $service_name has no container yet."
        echo "      (has it been started at least once?)"
        continue
    fi

    # save config
    echo "    - Saving container config to ./$service_name/config.json"
    docker inspect "$container_id" > "$service_dir/config.json"

    # save logs
    echo "    - Saving stdout/stderr logs to ./$service_name/docker.{out,err}"
    docker logs "$container_id" > "$service_dir/docker.out" 2> "$service_dir/docker.err"

    # save data volumes
    mkdir -p "$service_dir/volumes"
    for source in $(docker inspect -f '{{range .Mounts}}{{println .Source}}{{end}}' "$container_id"); do
        volume_dir="$service_dir/volumes$source"
        echo "    - Saving $source volume to ./$service_name/volumes$source"
        mkdir -p "$(dirname "$volume_dir")"
        cp -a -r "$source" "$volume_dir"
    done

    # save container filesystem
    echo "    - Saving container filesystem to ./$service_name/container.tar"
    docker export --output "$service_dir/container.tar" "$container_id"

    # save entire container root dir
    echo "    - Saving container root to $service_dir/root"
    cp -a -r "/var/lib/docker/containers/$container_id" "$service_dir/root"
done
echo "[*] Compressing backup folder to $backup_dir.tar.gz"
tar -zcf "$backup_dir.tar.gz" --totals "$backup_dir" && rm -Rf "$backup_dir"
echo "[√] Finished Backing up $project_name to $backup_dir.tar.gz."
# Resume the containers if paused above
# docker compose unpause
@SilverJan

Nice script. One thing that doesn't work properly is running the script from another directory, since the docker compose calls are missing the -p option.


pirate commented Jan 14, 2023

It's not intended to work from another directory; you must place the script inside the dir that contains your docker-compose.yml file.
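If you did want to run it from elsewhere, one option is to route every call through docker compose's `--project-directory` flag. A minimal sketch, where the `compose` wrapper function is a hypothetical addition and not part of the gist:

```shell
# Sketch (assumption): pin every compose call to the project directory so the
# script no longer depends on the current working directory.
project_dir="${1:-$PWD}"

compose() {
    docker compose --project-directory "$project_dir" "$@"
}

# The script's calls would then become, e.g.:
#   compose config --services
#   compose ps -q "$service_name"
```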


homonto commented Jan 14, 2023

I am very fresh to Docker and Compose as well, but I managed to offload my Home Assistant with some add-ons to another Linux machine using Docker Compose.
Currently I am running there:

docker compose config --services
chrony
duckdns
grafana
mariadb
mosquitto
node-red
phpmyadmin
pihole
timemachine
wireguard

Now I am trying to find a good backup solution, and this is how I landed here.
I took your script, changed "docker-compose" to "docker compose" where needed, and voila, it all works.
But I have 3 questions:
1. My "timemachine" container uses the "/mnt/timemachine" volume where its data is stored; apparently we are talking GBs here, so I would like to exclude this container from the backup script. Is there any way to exclude it?
2. When a database is backed up, it would apparently be good to stop it first, right? Not necessarily the other containers. Is there an easy way to do that in this script?
3. If I stop the container (docker compose stop mariadb), would the simple command "tar cfz maria.tar.gz /srv/docker/mariadb" be enough to really have EVERYTHING backed up for this container (plus, of course, the docker-compose.yml)?
What are all these "dumps", "logs", etc.? Isn't everything in the volume itself, provided my only volumes are inside the same folder, in this example /srv/docker/mariadb?

thank you for your help ;)


pirate commented May 31, 2023

  1. To exclude a container, modify this line to add a filter to the list of services it loops through, e.g.
- for service_name in $(docker compose config --services); do
+ for service_name in $(docker compose config --services | grep -v timemachine); do
  2. This is up to you: pg_dump and Redis SAVE don't require pausing the container to ensure consistency, but if you need to pause your other containers to ensure consistency at the application layer, I did mention that in these lines already:
# Optional: pause the containers before backing up to ensure consistency
# docker compose pause
  3. No, that only backs up some of the container's state. There is other hidden state in the Docker system, like remote mounts, stdout/stderr log output, container config and environment state, etc., which is why this script exists in the first place. You can read the state it generates on :67, :71, etc. to see how it's separate from the volume contents.
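The grep -v filter for excluding one service can be generalized to several at once. A sketch, where the variable, helper name, and service names are all assumptions rather than part of the gist:

```shell
# Sketch: drop multiple services from the backup loop by exact name.
# -E enables alternation, -v inverts the match, -x requires a whole-line match
# so e.g. "pihole" doesn't accidentally exclude "pihole-exporter".
EXCLUDE_SERVICES='timemachine|pihole'

filter_services() {
    grep -Evx "(${EXCLUDE_SERVICES})"
}

# In the script, the loop header would become:
#   for service_name in $(docker compose config --services | filter_services); do
```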

@erictcarter

@pirate I need your help: I customized the script and also executed the one you shared as-is, and my Docker is gone. How do I reverse this?


GeoHolz commented Nov 3, 2023

Awesome script! Could you edit it so environment variables on the containers in the docker-compose file can specify whether or not to back up the attached volumes? For example, when backing up Plex I don't want to back up the volume that contains my music and movies.

I added this feature: https://gist.github.com/GeoHolz/ee9362c82ee13f8a5690d86d6ec7bb0c
Thanks pirate!
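A lighter-weight variant of that feature can live entirely inside the script's mount loop: skip any mount source that matches an exclude list. A sketch, where the list contents and the helper name are assumptions:

```shell
# Sketch: skip selected mount sources during the volume-copy loop.
# exclude_volumes entries are example paths, not taken from the gist.
exclude_volumes=("/mnt/timemachine" "/srv/plex/media")

skip_volume() {
    local src="$1" ex
    for ex in "${exclude_volumes[@]}"; do
        # skip the mount if it is the excluded path or lives underneath it
        [[ "$src" == "$ex" || "$src" == "$ex"/* ]] && return 0
    done
    return 1
}

# Inside the Mounts loop, before the cp:
#   skip_volume "$source" && continue
```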
