@pirate
Last active December 6, 2024 09:54
Backup a docker-compose project, including all images, named and unnamed volumes, container filesystems, config, logs, and databases.
#!/usr/bin/env bash
### Bash Environment Setup
# http://redsymbol.net/articles/unofficial-bash-strict-mode/
# https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html
# set -o xtrace
set -o errexit
set -o errtrace
set -o nounset
set -o pipefail
IFS=$'\n'
# Fully backup a docker-compose project, including all images, named and unnamed volumes, container filesystems, config, logs, and databases.
project_dir="${1:-$PWD}"
if [ -f "$project_dir/docker-compose.yml" ]; then
    echo "[i] Found docker-compose config at $project_dir/docker-compose.yml"
else
    echo "[X] Could not find a docker-compose.yml file in $project_dir"
    exit 1
fi
project_name=$(basename "$project_dir")
backup_time=$(date +"%Y-%m-%d_%H-%M")
backup_dir="$project_dir/data/backups/$backup_time"
# Source any needed environment variables
[ -f "$project_dir/docker-compose.env" ] && source "$project_dir/docker-compose.env"
[ -f "$project_dir/.env" ] && source "$project_dir/.env"
echo "[+] Backing up $project_name project to $backup_dir"
mkdir -p "$backup_dir"
echo " - Saving docker-compose.yml config"
cp "$project_dir/docker-compose.yml" "$backup_dir/docker-compose.yml"
# Optional: pause the containers before backing up to ensure consistency
# docker compose pause
# Optional: run a command inside the container to dump your application's state/database to a stable file
echo " - Saving application state to ./dumps"
mkdir -p "$backup_dir/dumps"
# your database/stateful service export commands to run inside docker go here, e.g.
# docker compose exec postgres env PGPASSWORD="$POSTGRES_PASSWORD" pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" | gzip -9 > "$backup_dir/dumps/$POSTGRES_DB.sql.gz"
# docker compose exec redis redis-cli SAVE
# docker compose exec redis cat /data/dump.rdb | gzip -9 > "$backup_dir/dumps/redis.rdb.gz"
for service_name in $(docker compose config --services); do
    image_id=$(docker compose images -q "$service_name")
    image_name=$(docker image inspect --format '{{json .RepoTags}}' "$image_id" | jq -r '.[0]')
    container_id=$(docker compose ps -q "$service_name")
    service_dir="$backup_dir/$service_name"
    echo "[*] Backing up ${project_name}__${service_name} to ./$service_name..."
    mkdir -p "$service_dir"

    # save image
    echo " - Saving $image_name image to ./$service_name/image.tar"
    docker save --output "$service_dir/image.tar" "$image_id"

    if [[ -z "$container_id" ]]; then
        echo " - Warning: $service_name has no container yet."
        echo "   (has it been started at least once?)"
        continue
    fi

    # save config
    echo " - Saving container config to ./$service_name/config.json"
    docker inspect "$container_id" > "$service_dir/config.json"

    # save logs
    echo " - Saving stdout/stderr logs to ./$service_name/docker.{out,err}"
    docker logs "$container_id" > "$service_dir/docker.out" 2> "$service_dir/docker.err"

    # save data volumes
    mkdir -p "$service_dir/volumes"
    for source in $(docker inspect -f '{{range .Mounts}}{{println .Source}}{{end}}' "$container_id"); do
        volume_dir="$service_dir/volumes$source"
        echo " - Saving $source volume to ./$service_name/volumes$source"
        mkdir -p "$(dirname "$volume_dir")"
        cp -a -r "$source" "$volume_dir"
    done

    # save container filesystem
    echo " - Saving container filesystem to ./$service_name/container.tar"
    docker export --output "$service_dir/container.tar" "$container_id"

    # save entire container root dir
    echo " - Saving container root to $service_dir/root"
    cp -a -r "/var/lib/docker/containers/$container_id" "$service_dir/root"
done
echo "[*] Compressing backup folder to $backup_dir.tar.gz"
tar -zcf "$backup_dir.tar.gz" --totals "$backup_dir" && rm -Rf "$backup_dir"
echo "[√] Finished backing up $project_name to $backup_dir.tar.gz."
# Resume the containers if paused above
# docker compose unpause
@johntanner

This is great! Does an equally convenient way to restore containers with their associated volumes exist?

@pirate

pirate commented Feb 27, 2020

Not equally convenient, no. So far I've only manually browsed the output to restore individual files or volumes a few times.

@JesseChisholm

A safety net at line 33.5:
[[ -z "$container_id" ]] && continue
Or even echo a warning about a service that isn't currently running.

@pirate

pirate commented Sep 2, 2020

Fixed @JesseChisholm, thanks

@jslettengren

How would one go about restoring one of the backups created by this convenient script?

@pirate

pirate commented Nov 12, 2020

How much do you need restored? @jslettengren If you only need the volume data it's trivial: just copy the contents of the volume data folder into a new dir and point your new container at it. If you need the image too, you'll have to load it from the tar file with docker load < some_backed_up_image.tar. If you need the whole container filesystem or root dir, you'll have to do some manual copying into the running container to get it working. There's no one-command solution because it depends on exactly what you need. Restoring too much could be harmful or more complicated than necessary, so restore only what you need.
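A minimal restore sketch based on the approach described above (this is my illustration, not the gist author's tooling: the "web" service name and all paths are invented, and it fabricates a tiny fake backup tree so the tar/cp steps run anywhere; the docker load step is commented out because it needs a running daemon):

```shell
#!/usr/bin/env bash
# Hypothetical restore sketch for a backup produced by the script above.
set -o errexit -o nounset -o pipefail

# --- fabricate a tiny fake backup archive so this sketch is self-contained ---
workdir=$(mktemp -d)
mkdir -p "$workdir/backup/web/volumes/data"
echo "hello" > "$workdir/backup/web/volumes/data/file.txt"
tar -zcf "$workdir/backup.tar.gz" -C "$workdir" backup

# --- 1. unpack the top-level archive ---
restore_dir="$workdir/restore"
mkdir -p "$restore_dir"
tar -xzf "$workdir/backup.tar.gz" -C "$restore_dir"

# --- 2. reload the saved image (needs docker, so shown commented out) ---
# docker load < "$restore_dir/backup/web/image.tar"

# --- 3. copy just the volume data you need into a fresh dir ---
cp -a "$restore_dir/backup/web/volumes/data" "$workdir/restored-data"
cat "$workdir/restored-data/file.txt"   # -> hello
```

The general shape mirrors the backup script in reverse: unpack the outer archive, then pick out only the pieces (image tar, volume dirs, dumps) you actually need.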

@jslettengren

jslettengren commented Nov 15, 2020

Thanks a lot @pirate

Right now I have no real need, other than I just want to know what to do when the need arises :-)

To use the script, I run sudo bash docker-compose-backup.sh. I had to change one of the cp commands from:
cp "$project_dir/docker-compose.yml" > "$backup_dir/docker-compose.yml"
to
cp "$project_dir/docker-compose.yml" "$backup_dir/docker-compose.yml"

Why was the > in there in the first place?

@whjvdijk

whjvdijk commented Dec 16, 2020

Awesome script! Could you edit it so you can set environment variables for containers in the docker compose file to specify whether or not to back up the attached volumes? For example, when backing up Plex I don't want to back up the volume that contains my music and movies.

@ppkliu

ppkliu commented Feb 1, 2021

Thanks a lot @pirate
Would you give us a migration or restore example?

@sephentos

Thank you very much for this script!
A restore script would be greatly appreciated :)

@dilawar

dilawar commented Jul 4, 2021

I am getting permission denied problem on an AWS machine. Do I need to run this script as admin? Or should I tweak the volumes key in my docker-compose.yml file?

Looks like these are from the mariadb service.

cp: cannot access '/var/lib/docker/volumes/api_db-data/_data/mysql': Permission denied
cp: cannot access '/var/lib/docker/volumes/api_db-data/_data/performance_schema': Permission denied
cp: cannot open '/var/lib/docker/volumes/api_db-data/_data/multi-master.info' for reading: Permission denied
cp: cannot open '/var/lib/docker/volumes/api_db-data/_data/ib_buffer_pool' for reading: Permission denied
cp: cannot open '/var/lib/docker/volumes/api_db-data/_data/ibtmp1' for reading: Permission denied
cp: cannot open '/var/lib/docker/volumes/api_db-data/_data/ibtmp1' for reading: Permission denied

@derAlff

derAlff commented Jul 9, 2021

Very very cool script! It is easy to use.

But restoring with docker load < backupfile_from_docker_compose.tar.gz is not possible.
That command ends with the message open /var/lib/docker/tmp/docker-import-805572190/opt/json: no such file or directory. How can I solve this?

@pirate

pirate commented Aug 4, 2021

@necotec The top-level archive contains much more than just the image file. You have to uncompress the top-level .tar.gz to get the image file within, then pass the image file to docker load < ....

@Basti-Fantasti

Basti-Fantasti commented Jan 19, 2022

Hi, and thanks for the awesome script.

Can you add a note in the comments that jq needs to be installed? I just stumbled over it not being installed by default on my Debian system.

Oh, and maybe a hint that the container must be up and running for the backup 😄

Best regards
Bastian

@arifrhm

arifrhm commented Apr 4, 2022

- Saving docker-compose.yml config
- Saving application state to ./dumps

time="2022-04-04T11:14:43+07:00" level=warning msg="The "REACT_APP_API_V1" variable is not set. Defaulting to a blank string."
time="2022-04-04T11:14:43+07:00" level=warning msg="network default: network.external.name is deprecated in favor of network.name"
time="2022-04-04T11:14:46+07:00" level=warning msg="The "REACT_APP_API_V1" variable is not set. Defaulting to a blank string."
time="2022-04-04T11:14:46+07:00" level=warning msg="network default: network.external.name is deprecated in favor of network.name"
docker-compose-backup.sh: line 51: jq: command not found
time="2022-04-04T11:14:49+07:00" level=error msg="write /dev/stdout: The pipe is being closed.\n"

@arifrhm

arifrhm commented Apr 4, 2022

$ sh docker-compose-backup.sh
[i] Found docker-compose config at /g/BNS-dev-phase-2(30Mar2022)/ufe-bns/docker-compose.yml
[+] Backing up ufe-bns project to /g/BNS-dev-phase-2(30Mar2022)/ufe-bns/data/backups/2022-04-04_12-06
- Saving docker-compose.yml config
- Saving application state to ./dumps
time="2022-04-04T12:06:37+07:00" level=warning msg="The "REACT_APP_API_V1" variable is not set. Defaulting to a blank string."
time="2022-04-04T12:06:37+07:00" level=warning msg="network default: network.external.name is deprecated in favor of network.name"
time="2022-04-04T12:06:39+07:00" level=warning msg="The "REACT_APP_API_V1" variable is not set. Defaulting to a blank string."
time="2022-04-04T12:06:39+07:00" level=warning msg="network default: network.external.name is deprecated in favor of network.name"
time="2022-04-04T12:06:42+07:00" level=warning msg="The "REACT_APP_API_V1" variable is not set. Defaulting to a blank string."
time="2022-04-04T12:06:42+07:00" level=warning msg="network default: network.external.name is deprecated in favor of network.name"
[*] Backing up ufe-bns__api to ./api...
- Saving registry.gitlab.com/ihsansolusi/universal-front-end/api:latest image to ./api/image.tar
- Saving container config to ./api/config.json
- Saving stdout/stderr logs to ./api/docker.{out,err}
- Saving G:\BNS-dev-phase-2(30Mar2022)\ufe-bns\api\app volume to ./api/volumesG:\BNS-dev-phase-2(30Mar2022)\ufe-bns\api\app
- Saving container filesystem to ./api/container.tar
- Saving container root to /g/BNS-dev-phase-2(30Mar2022)/ufe-bns/data/backups/2022-04-04_12-06/api/root
cp: cannot stat '/var/lib/docker/containers/f7d4ec6770e19e07bc1cf716715bc16f60e3ffbdb4d5db3db546ee8f5800dc90': No such file or directory

@Joeshiett

Don't use docker-compose installed with snap. This will result in the cp: cannot stat '/var/lib/docker/containers/f7d4e........ error. Install docker-compose the manual way.

@killmasta93

Hi, not sure if someone else has gotten this:
bak.sh: 8: set: Illegal option -o errtrace

running on ubuntu 20 LTS server

@SilverJan

Nice script. What doesn't work properly is the execution of the script from another directory, since the docker-compose calls are missing the -p option.

@pirate

pirate commented Jan 14, 2023

It's not intended to work from another directory, you must place the script inside the dir that contains your docker-compose.yml file.

@homonto

homonto commented Jan 14, 2023

I am very fresh to docker and compose as well, but I managed to offload my Home Assistant with some add-ons to another Linux machine using docker compose.
Currently I am running there:

docker compose config --services
chrony
duckdns
grafana
mariadb
mosquitto
node-red
phpmyadmin
pihole
timemachine
wireguard

Now I am trying to find a good backup solution, and this is how I landed here.
I took your script, changed "docker-compose" to "docker compose" where needed, and voila: it all works.
But I have 3 questions:
1. my "timemachine" container uses the "/mnt/timemachine" volume where its data is stored. Apparently we are talking GB here, so I would like to exclude this container from the backup script. Is there any way to exclude it?
2. when a database is backed up, apparently it would be nice to stop it first, right? Not necessarily the other containers. Is there an easy way to do that in this script?
3. in case I stop the container (docker compose stop mariadb), would the simple command "tar cfz maria.tar.gz /srv/docker/mariadb" be enough to really have EVERYTHING backed up for this container (along with the docker-compose.yml, of course)?
What are all these "dumps", "logs", etc.? Isn't everything in the volume itself, provided my only volumes are inside the same folder, in this example /srv/docker/mariadb?

thank you for your help ;)

@pirate

pirate commented May 31, 2023

  1. To exclude a container you'd modify this line to add a filter to the list of services it loops through, e.g.
- for service_name in $(docker compose config --services); do
+ for service_name in $(docker compose config --services | grep -v timemachine); do
  2. This is up to you: pg_dump and redis SAVE don't require you to pause the container to ensure consistency, but if you need to pause your other containers to ensure consistency at the application layer, I did mention that in these lines already:
# Optional: pause the containers before backing up to ensure consistency
# docker compose pause
  3. No, that only backs up some of the container's state. There is other hidden state in the docker system like remote mounts, stdout/stderr log output, container config and environment state, etc., which is why this script exists in the first place. You can read the state it generates on :67, :71, etc. to see how it's separate from the volume contents.
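Building on that exclusion filter, here is a hedged sketch of making it configurable via an environment variable (BACKUP_EXCLUDE is a name I invented, and the services list is a hard-coded stand-in for `docker compose config --services` so the snippet runs without docker):

```shell
#!/usr/bin/env bash
# Stand-in for `docker compose config --services`, so this runs anywhere:
services=$'chrony\ngrafana\ntimemachine\nwireguard'

# Hypothetical knob: BACKUP_EXCLUDE="timemachine|plex" ./docker-compose-backup.sh
BACKUP_EXCLUDE="${BACKUP_EXCLUDE:-timemachine}"

# grep -Ev drops any service matching the (extended-regex) exclude pattern
for service_name in $(echo "$services" | grep -Ev "$BACKUP_EXCLUDE"); do
    echo "would back up: $service_name"
done
```

In the real script you'd swap the stand-in list for the actual docker compose call, mirroring the grep -v filter shown in the diff above.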

@erictcarter

@pirate I need your help: I customized the script and also executed the one you shared as-is, and now my docker is gone. How do I reverse this?

@GeoHolz

GeoHolz commented Nov 3, 2023

Awesome script, could you edit it so you can add environment variables for containers in the docker compose file so you can specify if you want to make backups of the attached volumes or not? For example, when backing up Plex I don't want to back up the volume that contains my music and movies.

I added this feature: https://gist.github.com/GeoHolz/ee9362c82ee13f8a5690d86d6ec7bb0c
Thanks pirate!

@zirkuswurstikus

bak.sh: 8: set: Illegal option -o errtrace
I assume you invoke the script with sh bak.sh. Be sure to use bash bak.sh

@JohnLines

I have backed up other docker containers, but found Bookwyrm exposed some issues which others may also encounter. I am not fully there yet, so I will post individual comments for each issue.

First a neat snippet from the Bookwyrm bw-dev script

if docker compose &> /dev/null; then DOCKER_COMPOSE="docker compose"; else DOCKER_COMPOSE="docker-compose"; fi
and then use DOCKER_COMPOSE in the rest of the script, unifying for Debian based and other systems.
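A runnable variant of that idea (using `docker compose version` for a clean exit status is my tweak, and I've used a bash array because the backup script sets IFS=$'\n', so an unquoted two-word string like "docker compose" would not split back into a command there):

```shell
#!/usr/bin/env bash
# Pick whichever compose flavor is available; fall back to the legacy binary.
if docker compose version &> /dev/null; then
    DOCKER_COMPOSE=(docker compose)     # compose v2 plugin
else
    DOCKER_COMPOSE=(docker-compose)     # legacy standalone binary
fi
echo "using: ${DOCKER_COMPOSE[*]}"
# later in the script: "${DOCKER_COMPOSE[@]}" ps -q "$service_name"
```

The array expansion "${DOCKER_COMPOSE[@]}" keeps the command and subcommand as separate words regardless of IFS.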

@JohnLines

My .env had a variable (a passkey) which contained a $ (dollar) sign, which caused sourcing it to fail: the shell could not find the variable named by the random string following the $.

I worked around it for now by simply commenting out that line in .env to move past it, but it's not an ideal situation. I know (as of very recently; I'm not really a docker person) that secrets would be a better way, but that would be a big upstream change.
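For reference, a small runnable illustration of the failure mode and the single-quote fix (the PASSKEY name and value are invented):

```shell
#!/usr/bin/env bash
# An unquoted value like PASSKEY=abc$123def is expanded when sourced:
# the shell reads $1 (a positional parameter) out of the middle of it.
# Single quotes keep the $ literal:
envfile=$(mktemp)
echo "PASSKEY='abc\$123def'" > "$envfile"   # file contains: PASSKEY='abc$123def'
source "$envfile"
echo "$PASSKEY"   # -> abc$123def
rm -f "$envfile"
```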

@JohnLines

Bind volumes: attempting to back up bind mounts caused the script to fail. I adapted the "save data volumes" part to:

# save data volumes (named volumes only; skip bind mounts and other types)
mkdir -p "$service_dir/volumes"
for typesource in $(docker inspect -f '{{range .Mounts}}{{println .Type .Source}}{{end}}' "$container_id"); do
    type=$(echo "$typesource" | awk '{print $1}')
    source=$(echo "$typesource" | awk '{print $2}')
    if [[ "$type" == "volume" ]]; then
        volume_dir="$service_dir/volumes$source"
        echo "    - Saving $source volume to ./$service_name/volumes$source"
        mkdir -p "$(dirname "$volume_dir")"
        cp -a -r "$source" "$volume_dir"
    else
        echo "    - Not backing up mount $source with type $type"
    fi
done

@JohnLines

One of the services did not have an image - protect against this case with

for service_name in $(docker-compose config --services); do
    image_id=$(docker-compose images -q "$service_name")
    if [[ -z "$image_id" ]]; then
        echo " - Note: $service_name has no image"
    else
        image_name=$(docker image inspect --format '{{json .RepoTags}}' "$image_id" | jq -r '.[0]')
    fi
    container_id=$(docker-compose ps -q "$service_name")

and
# save image
if [[ -n "$image_id" ]]; then
    echo " - Saving $image_name image to ./$service_name/image.tar"
    docker save --output "$service_dir/image.tar" "$image_id"
fi

This enabled my docker-compose backup to complete. I will now copy it to its migration target host and try a restore there. Although I take your point at https://gist.github.com/pirate/265e19a8a768a48cf12834ec87fb0eed#gistcomment-3525603, from experience I know that while backups are nice to have, what people really want is restores, and for that a backup is a required starting point.
