@jeremyharris
Created July 10, 2012 19:41
Quick bash script to back up MySQL databases as separate files, similar to how MySQL Workbench does.
#!/bin/bash
#
# Script to back up all MySQL tables as separate files:
#
# $BACKUP_PATH/<date>/<db_name>/<host>_<table_name>.sql.gz
# MySQL settings
MYSQL_USER="root"
MYSQL_PASS=""
MYSQL_HOST="localhost"
# Ignore these databases (space separated)
IGNORE="test"
# Backup path
BACKUP_PATH="/usr/backup/databases"
#
# End Configuration
#
# Paths to programs
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
CHOWN="$(which chown)"
CHMOD="$(which chmod)"
GZIP="$(which gzip)"
# Misc vars
HOST="$(hostname)"
NOW="$(date +"%d%m%Y")"
BACKUP_PATH+="/$NOW"
MYSQL_PERMS="-u $MYSQL_USER -h $MYSQL_HOST"
IGNORE+=" information_schema mysql performance_schema"
echo "$BACKUP_PATH"
if [ "$MYSQL_PASS" ] ; then
  MYSQL_PERMS+=" -p$MYSQL_PASS"
fi
# Get the list of databases to back up
DBS="$($MYSQL $MYSQL_PERMS -Bse 'show databases')"
for DB in $DBS ; do
  # Skip ignored databases (match whole names so e.g. "test" doesn't also skip "test2")
  if [[ " $IGNORE " =~ " $DB " ]] ; then
    echo "$(date +"%d-%m-%Y %H:%M:%S") Skipping $DB"
    continue
  fi
  # Create the per-database path if it doesn't exist
  SUBPATH="$BACKUP_PATH/$DB"
  [ ! -d "$SUBPATH" ] && mkdir -p "$SUBPATH" || :
  # Change permissions so only root can access backups
  $CHOWN -R 0:0 "$SUBPATH"
  $CHMOD 0700 "$SUBPATH"
  echo "$(date +"%d-%m-%Y %H:%M:%S") Backing up $DB to $SUBPATH"
  TABLES="$($MYSQL $MYSQL_PERMS -Bse "show tables from $DB")"
  for TABLE in $TABLES ; do
    FILE="$SUBPATH/${HOST}_${TABLE}.sql.gz"
    $MYSQLDUMP $MYSQL_PERMS "$DB" "$TABLE" | $GZIP -9 > "$FILE"
  done
  echo "$(date +"%d-%m-%Y %H:%M:%S") ...done!"
done
echo "$(date +"%d-%m-%Y %H:%M:%S") All done"
exit 0
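
A minimal sketch of running this nightly from root's crontab, assuming the script is saved as /usr/local/bin/mysql_backup.sh (a hypothetical path) and you want the echoed progress lines kept in a log:

# run the backup script every night at 02:30 and append its output to a log
30 2 * * * /usr/local/bin/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1
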
@josegonzalez

Why not use xtrabackup?

@jeremyharris
Author

Two reasons:

  • I didn't know about it.
  • This was also an exercise.

It looks like xtrabackup backs up the physical data files, which I've read can lead to corruption problems (which is why I stuck with dumps).

@josegonzalez

Trust, but always verify.

xtrabackup is pretty good about what it does, but I would always recommend verifying backups regardless. On AWS, we do:

  • FLUSH LOCAL TABLES; FLUSH TABLES WITH READ LOCK
  • xfs_freeze -f
  • snapshot via ebs
  • xfs_freeze -u
  • UNLOCK TABLES

Works pretty well.
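
A minimal sketch of that sequence. FLUSH TABLES WITH READ LOCK is released as soon as its session ends, so the lock has to be held by a connection that stays open across the freeze and the snapshot; the mount point, volume id, and use of the aws CLI below are all assumptions, not part of the steps above:

# Hold the read lock in a dedicated mysql session while the
# filesystem is frozen and the EBS snapshot is started.
exec 3> >(mysql -u root)                      # connection details are an assumption
echo "FLUSH LOCAL TABLES; FLUSH TABLES WITH READ LOCK;" >&3
sleep 5                                       # crude: give the server time to acquire the lock
xfs_freeze -f /var/lib/mysql                  # assumes the datadir is its own XFS mount
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "nightly mysql snapshot"    # placeholder volume id
xfs_freeze -u /var/lib/mysql
echo "UNLOCK TABLES;" >&3
exec 3>&-                                     # closing the session also drops the lock
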

@jeremyharris
Author

How do you verify? Do you verify the actual db data (uncompress and run each sql script) or the backup files? We're currently using this to back up the tables, then backup-manager to copy to S3. Still learning this stuff...

@josegonzalez

For the steps I posted above, we use the snapshot to create a new volume and replace our current staging with it. If staging is not up to date once a night, I know the backup failed.
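
A minimal sketch of that restore-and-swap check, assuming the aws CLI, placeholder snapshot/instance IDs, and a staging data volume mounted at /var/lib/mysql (none of these details come from the comment above):

# Build a volume from last night's snapshot and attach it to the staging instance
VOL=$(aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
      --availability-zone us-east-1a --query 'VolumeId' --output text)
aws ec2 wait volume-available --volume-ids "$VOL"
aws ec2 attach-volume --volume-id "$VOL" --instance-id i-0123456789abcdef0 --device /dev/xvdf

# On the staging box: swap the data volume under MySQL
service mysql stop
umount /var/lib/mysql
mount /dev/xvdf /var/lib/mysql
service mysql start
# if staging comes up with last night's data, the backup worked
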

@ryanjanborja

I'm doing this:

TABLES="$(mysql -h "$HOST" -u "$USERNAME" -p"$PASSWORD" -P "$PORT" -Bse 'SHOW TABLES FROM '$DB)"
for TABLE in $TABLES; do
    $(mysqldump -h "$HOST" -u "$USERNAME" -p"$PASSWORD" -P "$PORT" $DB $TABLE | gzip -9 > "../$STORAGE_FOLDER/$DB/${DB}_${TABLE}_`date +"%Y-%m-%d_%H%M%S"`.sql.gz")
done

but I got an error that the filename is too long. $TABLE is printing all of the table names at once instead of one by one.
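
A minimal sketch of the same loop that reads the table list one line at a time (so each $TABLE holds exactly one name), reusing the $HOST/$USERNAME/$PASSWORD/$PORT/$DB/$STORAGE_FOLDER variables from the snippet above as-is:

mysql -h "$HOST" -u "$USERNAME" -p"$PASSWORD" -P "$PORT" -Bse "SHOW TABLES FROM $DB" |
while read -r TABLE; do
    # one dump per table, timestamped and gzipped
    mysqldump -h "$HOST" -u "$USERNAME" -p"$PASSWORD" -P "$PORT" "$DB" "$TABLE" |
        gzip -9 > "../$STORAGE_FOLDER/$DB/${DB}_${TABLE}_$(date +"%Y-%m-%d_%H%M%S").sql.gz"
done
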
