
@aktau /bench.sh
Last active Oct 24, 2018

Benchmark which method plays best with pg_dump for rsyncability
#!/bin/bash
# Look at the speedup factors that rsync reports: my
# tests indicate that `pigz --rsyncable` wins among the
# compressed formats, but that the raw uncompressed
# format is best if you want optimal rsyncability.
#
# This makes sense of course, but for many of us it's
# not feasible to keep uncompressed copies around, as the
# database can be huge and full of compressible data.
#
# NOTES
# you'll have to change the HOST variable below to
# point to the host + folder to fetch the files from
#
# EXAMPLE
# # first run the gen.sh script on the machine with
# # the pg install then do:
# $ bench.sh orig changed
set -e
set -u
HOST="vagrant:/home/vagrant/pgdumptests"
ORIG="$1"
NEW="$2"
rsync -avh --stats --progress "$HOST/$ORIG/" data/
rsync -avh --stats --progress "$HOST/$NEW/m2d.arsync.dump.gz" data/
rsync -avh --stats --progress "$HOST/$NEW/m2d.compr.dump" data/
rsync -avh --stats --progress "$HOST/$NEW/m2d.gzip.dump.gz" data/
rsync -avh --stats --progress "$HOST/$NEW/m2d.pigz.dump.gz" data/
rsync -avh --stats --progress "$HOST/$NEW/m2d.raw.dump" data/
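To compare the formats side by side, the relevant number in each rsync summary is the line `total size is …  speedup is …`. A hypothetical helper (not part of the gist) can pull just that factor out of the `--stats` output:

```shell
# speedup: print the speedup factor from rsync's summary, which ends with
# a line like:
#   total size is 1.20M  speedup is 43.21
# A higher factor means rsync transferred a smaller fraction of the file.
speedup() {
  sed -n 's/.*speedup is \([0-9.][0-9.]*\).*/\1/p'
}
```

Usage would be e.g. `rsync -avh --stats --progress "$HOST/$NEW/m2d.pigz.dump.gz" data/ | speedup`; the function name and the idea of scripting the comparison are my own, not the gist author's.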
#!/bin/bash
# gen.sh
# use it like this:
# $ gen.sh orig
# $ psql <mydb>
# make some changes to the database; try a small change in the largest tables
# $ gen.sh changed
# afterwards, run the bench.sh script from another host (I took the easy
# route and ran bench.sh on the virtual host and gen.sh on a virtual machine)
set -e
set -u
DIR="$1"
[ -d "$DIR" ] || mkdir -p "$DIR"
# -Z0 forces no compression; we supply this flag when piping
# the dump to our own compressor
pg_dump -Fc m2d > "$DIR/m2d.compr.dump"
pg_dump -Z0 -Fc m2d > "$DIR/m2d.raw.dump"
pg_dump -Z0 -Fc m2d | pigz > "$DIR/m2d.pigz.dump.gz"
pg_dump -Z0 -Fc m2d | gzip > "$DIR/m2d.gzip.dump.gz"
pg_dump -Z0 -Fc m2d | pigz --rsyncable > "$DIR/m2d.arsync.dump.gz"
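The reason `--rsyncable` matters: without it, a small change early in the input perturbs essentially the whole compressed stream, so rsync's rolling checksums find almost no matching blocks. A quick demonstration with plain `gzip` (my own illustration, assuming a system `gzip` is available; pigz without `--rsyncable` behaves the same way):

```shell
# A one-byte change at the very start of the input produces a compressed
# stream that differs from the original, even though the remaining ~100 KB
# of input is identical. gzip -n omits the timestamp from the header so
# the comparison reflects only the data.
tmp=$(mktemp -d)
yes a | head -c 100000 > "$tmp/orig"
{ printf 'b'; tail -c +2 "$tmp/orig"; } > "$tmp/changed"
gzip -nc "$tmp/orig"    > "$tmp/orig.gz"
gzip -nc "$tmp/changed" > "$tmp/changed.gz"
cmp -s "$tmp/orig.gz" "$tmp/changed.gz" && echo same || echo differ
rm -r "$tmp"
```

With `--rsyncable`, the compressor periodically resets its state at content-defined boundaries, so the streams re-synchronize shortly after the change instead of diverging for the rest of the file, at a small cost in compression ratio.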