@lcguida
Last active June 4, 2018 12:48
Creating a PostgreSQL Docker image from a production database
#!/bin/bash
# Fail the script if a command fails, and trace commands as they run
set -ex
# Pull the postgres:9.4-alpine base image
docker pull postgres:9.4-alpine
# We will name our container `pg_tmp`, so first make sure that no
# container with this name is still running
for container_id in $(docker ps -qf "name=pg_tmp"); do
  docker stop $container_id
done
# We also remove any stopped containers with this name
for container_id in $(docker ps -aqf "name=pg_tmp"); do
  docker rm $container_id
done
# Run a container with postgres:9.4-alpine as the base image.
# * Use port 5433 so it won't conflict with a local PostgreSQL install
# * Name the container so it's easier to manipulate
# * Change the default PGDATA, because the original image declares a volume
#   at the default location and we want the data to stay in the container
docker run \
  --rm \
  -p 5433:5432 \
  --name pg_tmp \
  -d \
  -e POSTGRES_USER=user \
  -e POSTGRES_DB=my_database \
  -e POSTGRES_PASSWORD=passwd \
  -e PGDATA=/var/lib/postgresql2/data \
  postgres:9.4-alpine
# Retrieve the container id
pg_container_id=$(docker ps -qf "name=pg_tmp" | xargs)
# Sanity check: is the container really running?
if [[ -z $pg_container_id ]]; then
  echo "Container is not running. Exiting."
  exit 1
fi
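# Optional sketch: `docker run -d` returns before PostgreSQL inside the
# container has finished initializing, so wait until it accepts connections
# before loading anything. Assumes the pg_isready client tool is installed
# on the host.
until pg_isready -h 0.0.0.0 -p 5433 -U user > /dev/null 2>&1; do
  sleep 1
done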
# NOTE:
# The commands below won't prompt for a password because we have created
# a ~/.pgpass file with the credentials. See
# https://www.postgresql.org/docs/9.4/static/libpq-pgpass.html for more details.
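# For reference, each ~/.pgpass line has the form
#   hostname:port:database:username:password
# e.g. (placeholder values; adjust for your production host and the
# container above):
#   production-db.example.com:5432:production_db_name:production_user:secret
#   0.0.0.0:5433:my_database:user:passwd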
dump_file=$HOME/docker_db/dump/no_data.dump # Path to dump file
structure_file=$HOME/current/db/structure.sql # Rails database structure
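# structure.sql is the file Rails generates when the app uses the SQL schema
# format (config.active_record.schema_format = :sql); it can be refreshed
# with, e.g.:
#   bundle exec rake db:structure:dump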
rm -f $dump_file
# Create a database dump, ignoring some tables (unnecessary data). This could
# be a full dump of your database if it isn't huge.
pg_dump -v \
  -U production_user \
  --data-only \
  -T "*_logs" -T "pghero*" \
  -Fc production_db_name > $dump_file
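# Optional check (commented out; assumes the dump step above succeeded):
# list the custom-format dump's table of contents to confirm the excluded
# tables are really absent.
#   pg_restore -l $dump_file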
# Create the database structure in the docker container
psql -h 0.0.0.0 -p 5433 -U user my_database < $structure_file
# Restore database data
pg_restore -v \
  -h 0.0.0.0 \
  -p 5433 \
  -U user \
  --no-owner \
  --role=user \
  -Fc --dbname=my_database \
  --data-only \
  $dump_file
# Commit the container as an image, tagged with your registry address
docker commit $pg_container_id registry.example.com:5000/my_repo/my_image
# Push image to registry
docker push registry.example.com:5000/my_repo/my_image
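# A possible variant: also push a dated tag so older snapshots stay
# available in the registry.
#   docker tag registry.example.com:5000/my_repo/my_image \
#     registry.example.com:5000/my_repo/my_image:$(date +%Y-%m-%d)
#   docker push registry.example.com:5000/my_repo/my_image:$(date +%Y-%m-%d)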
# Stop container
docker stop $pg_container_id
exit 0
# Now we can easily pull the image:
$ docker pull registry.example.com:5000/my_repo/my_image
# And run a database with all necessary data locally:
$ docker run --name my_db -p 5433:5432 registry.example.com:5000/my_repo/my_image
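# To check the restored data, connect with psql from another terminal
# (the `docker run` above stays in the foreground); these are the same
# credentials baked into the image:
$ psql -h 0.0.0.0 -p 5433 -U user my_database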