Last active September 4, 2023 21:45
Backup a docker-compose project, including all images, named and unnamed volumes, container filesystems, config, logs, and databases.
#!/usr/bin/env bash
### Bash Environment Setup
# set -o xtrace
set -o errexit
set -o errtrace
set -o nounset
set -o pipefail
# Fully backup a docker-compose project, including all images, named and unnamed volumes, container filesystems, config, logs, and databases.
project_dir=$(pwd)  # run this script from inside your docker-compose project directory

if [ -f "$project_dir/docker-compose.yml" ]; then
    echo "[i] Found docker-compose config at $project_dir/docker-compose.yml"
else
    echo "[X] Could not find a docker-compose.yml file in $project_dir"
    exit 1
fi
project_name=$(basename "$project_dir")
backup_time=$(date +"%Y-%m-%d_%H-%M")
# Source any needed environment variables
[ -f "$project_dir/docker-compose.env" ] && source "$project_dir/docker-compose.env"
[ -f "$project_dir/.env" ] && source "$project_dir/.env"
backup_dir="$project_dir/data/backups/$backup_time"  # where this backup will be written

echo "[+] Backing up $project_name project to $backup_dir"
mkdir -p "$backup_dir"
echo " - Saving docker-compose.yml config"
cp "$project_dir/docker-compose.yml" "$backup_dir/docker-compose.yml"
# Optional: pause the containers before backing up to ensure consistency
# docker compose pause
# Optional: run a command inside the container to dump your application's state/database to a stable file
echo " - Saving application state to ./dumps"
mkdir -p "$backup_dir/dumps"
# your database/stateful service export commands to run inside docker go here, e.g.
# docker compose exec postgres env PGPASSWORD="$POSTGRES_PASSWORD" pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" | gzip -9 > "$backup_dir/dumps/$POSTGRES_DB.sql.gz"
# docker compose exec redis redis-cli SAVE
# docker compose exec redis cat /data/dump.rdb | gzip -9 > "$backup_dir/dumps/redis.rdb.gz"
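# (hypothetical extra example, assuming a MariaDB/MySQL service named "mariadb"
#  with its root password in $MYSQL_ROOT_PASSWORD):
# docker compose exec mariadb mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases | gzip -9 > "$backup_dir/dumps/mysql.sql.gz"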
for service_name in $(docker compose config --services); do
    image_id=$(docker compose images -q "$service_name")
    image_name=$(docker image inspect --format '{{json .RepoTags}}' "$image_id" | jq -r '.[0]')
    container_id=$(docker compose ps -q "$service_name")

    service_dir="$backup_dir/$service_name"
    echo "[*] Backing up ${project_name}__${service_name} to ./$service_name..."
    mkdir -p "$service_dir"

    # save image
    echo "    - Saving $image_name image to ./$service_name/image.tar"
    docker save --output "$service_dir/image.tar" "$image_id"

    if [[ -z "$container_id" ]]; then
        echo "    - Warning: $service_name has no container yet."
        echo "      (has it been started at least once?)"
        continue
    fi

    # save config
    echo "    - Saving container config to ./$service_name/config.json"
    docker inspect "$container_id" > "$service_dir/config.json"

    # save logs
    echo "    - Saving stdout/stderr logs to ./$service_name/docker.{out,err}"
    docker logs "$container_id" > "$service_dir/docker.out" 2> "$service_dir/docker.err"

    # save data volumes
    mkdir -p "$service_dir/volumes"
    for source in $(docker inspect -f '{{range .Mounts}}{{println .Source}}{{end}}' "$container_id"); do
        volume_dir="$service_dir/volumes$source"
        echo "    - Saving $source volume to ./$service_name/volumes$source"
        mkdir -p "$(dirname "$volume_dir")"
        cp -a -r "$source" "$volume_dir"
    done

    # save container filesystem
    echo "    - Saving container filesystem to ./$service_name/container.tar"
    docker export --output "$service_dir/container.tar" "$container_id"

    # save entire container root dir
    echo "    - Saving container root to $service_dir/root"
    cp -a -r "/var/lib/docker/containers/$container_id" "$service_dir/root"
done
echo "[*] Compressing backup folder to $backup_dir.tar.gz"
tar -zcf "$backup_dir.tar.gz" --totals "$backup_dir" && rm -Rf "$backup_dir"
echo "[√] Finished backing up $project_name to $backup_dir.tar.gz."
# Resume the containers if paused above
# docker compose unpause
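
A rough usage sketch (the script filename and project path below are hypothetical examples, not part of the gist):

```shell
# Hypothetical usage:
#   cd /srv/docker/myproject            # the directory containing docker-compose.yml
#   chmod +x ./docker-compose-backup.sh
#   ./docker-compose-backup.sh          # writes a timestamped .tar.gz backup archive
# The archive name embeds the same timestamp format the script uses:
backup_time=$(date +"%Y-%m-%d_%H-%M")
echo "$backup_time"
```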
homonto commented Jan 14, 2023

I am very fresh to Docker and Compose as well, but I managed to offload my Home Assistant with some add-ons to another Linux machine using docker compose.
Currently I am running there:

docker compose config --services

Now I am trying to find a good backup solution, and this is how I landed here.
I took your script, changed "docker-compose" to "docker compose" where needed, and voilà - it all works.
But I have 3 questions:
1- my "timemachine" container uses a "/mnt/timemachine" volume where its data is stored - apparently we are talking GBs here, so I would like to exclude this container from the backup script. Is there any way to exclude it?
2- when the database is backed up, it would apparently be nice to stop it first, right? Not necessarily the other containers - is there an easy way to do that in this script?
3- in case I stop the container (docker compose stop mariadb), would the simple command "tar cfz maria.tar.gz /srv/docker/mariadb" be enough to really have EVERYTHING backed up for this container (along with the docker-compose.yml, of course)?
What are all these "dumps", "logs", etc.? Isn't everything in the volume itself, provided my only volumes are inside the same folder, in this example /srv/docker/mariadb?

thank you for your help ;)

pirate commented May 31, 2023

1. to exclude a container you'd modify this line to add a filter to the list of containers it loops through, e.g.
- for service_name in $(docker compose config --services); do
+ for service_name in $(docker compose config --services | grep -v timemachine); do
2. this is up to you; pg_dump and redis SAVE don't require pausing the container to ensure consistency, but if you need to pause your other containers to ensure consistency at the application layer, I did mention that in these lines already:
# Optional: pause the containers before backing up to ensure consistency
# docker compose pause
3. no, that only backs up some of that container's state. There is other hidden state in the Docker system like remote mounts, stdout/stderr log output, container config and environment state, etc., which is why this script exists in the first place. You can read the state it generates on :67, :71, etc. to see how it's separate from the volume contents.
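
The `grep -v` exclusion from point 1 can be sanity-checked without touching docker at all (the service names here are made up for illustration):

```shell
# Simulate the output of `docker compose config --services`
# with fake service names, one per line
services=$'homeassistant\ntimemachine\nmariadb'

# Apply the same filter as the modified loop: drop the timemachine service
filtered=$(echo "$services" | grep -v timemachine)

echo "$filtered"
# homeassistant
# mariadb
```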


@pirate I need your help. I customized the script and also executed the one you shared as-is, and now my docker is gone - how do I reverse this?
