Backup MySQL to Amazon S3

This is a simple way to back up your MySQL tables to Amazon S3 for a nightly backup - this is all to be done on your server :-)

Sister Document - Restore MySQL from Amazon S3 - read that next

1 - Install s3cmd

This is for CentOS 5.6; see http://s3tools.org/repositories for other systems such as Ubuntu.

# Install s3cmd
cd /etc/yum.repos.d/
wget http://s3tools.org/repo/CentOS_5/s3tools.repo
yum install s3cmd
# Setup s3cmd
s3cmd --configure
    # You'll need to enter your AWS access key and secret key here; everything else is optional and can be left at the defaults :-)
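
To sanity-check the configuration, you can list your buckets (a quick test; it assumes your keys are allowed to list buckets):

# Should print your bucket names, e.g. s3://bucketname
s3cmd ls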

2 - Add your script

Upload a copy of s3mysqlbackup.sh (shown below; it will need some tweaks for your setup), make it executable, and test it:

# Add the executable bit
chmod +x s3mysqlbackup.sh
# Run the script to make sure it's all tickety boo
./s3mysqlbackup.sh

3 - Run it every night with CRON

Assuming the backup script is stored in /var/www/s3mysqlbackup.sh, we need to add a cron task to run it automatically:

# Edit the crontab
env EDITOR=nano crontab -e
    # Add the following lines:
    # Run the database backup script at 3am
    0 3 * * * bash /var/www/s3mysqlbackup.sh >/dev/null 2>&1
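
To confirm the entry was saved, list the crontab:

# Show the current user's cron entries
crontab -l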

4 - Don't expose the script!

If for some reason you put this script in a public folder (not sure why you would do this), you should add the following to your .htaccess or httpd.conf file to prevent public access to the files:

### Deny public access to shell files
<Files *.sh>
    Order allow,deny
    Deny from all
</Files>
s3mysqlbackup.sh

#!/bin/bash
# Based on https://gist.github.com/2206527
# Be pretty
echo -e " "
echo -e " . ____ . ______________________________"
echo -e " |/ \| | |"
echo -e "[| \e[1;31m♥ ♥\e[00m |] | S3 MySQL Backup Script v.0.1 |"
echo -e " |___==___| / © oodavid 2012 |"
echo -e " |______________________________|"
echo -e " "
# Basic variables
mysqlpass="ROOTPASSWORD"
bucket="s3://bucketname"
# Timestamp (sortable AND readable)
stamp=`date +"%s - %A %d %B %Y @ %H%M"`
# List all the databases
databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`
# Feedback
echo -e "Dumping to \e[1;32m$bucket/$stamp/\e[00m"
# Loop the databases
for db in $databases; do
    # Define our filenames
    filename="$stamp - $db.sql.gz"
    tmpfile="/tmp/$filename"
    object="$bucket/$stamp/$filename"
    # Feedback
    echo -e "\e[1;34m$db\e[00m"
    # Dump and zip
    echo -e "  creating \e[0;35m$tmpfile\e[00m"
    mysqldump -u root -p$mysqlpass --force --opt --databases "$db" | gzip -c > "$tmpfile"
    # Upload
    echo -e "  uploading..."
    s3cmd put "$tmpfile" "$object"
    # Delete
    rm -f "$tmpfile"
done
# Jobs a goodun
echo -e "\e[1;32mJobs a goodun\e[00m"

Thanks! This is the perfect tool I'm looking for!

Thanks for this wonderful script.
Worked like a charm.

I was thinking of a new feature: cyclic backups, where the user configures a maximum number of backups to keep.

@embedded1 here's what I do to keep around only the backups I care about.

# keep 2 Sunday backups
if [ `date +%A` == 'Sunday' ]; then
    rm `ls -t $store_dir/$db-Sunday*gz | awk 'NR > 2'`
# otherwise keep 5 backups
else
    rm `ls -t $store_dir/$db*gz | grep -v 'Sunday' | awk 'NR > 5'`
fi

echo "sync'ing $store_dir/ to $bucket_dir/..."
# trailing slashes required
s3cmd sync --delete-removed "$store_dir/" "$bucket_dir/"
echo "done"

You can just set up your S3 bucket to delete things after a certain time period, no?

Aside -- great, and thanks for posting this stuff! I was really dreading the thought of DIY.
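
For reference, such a rule can also be set from the AWS CLI; a minimal sketch, assuming a 30-day retention on a bucket named bucketname:

# lifecycle.json - expire objects 30 days after creation
{
  "Rules": [
    {
      "ID": "expire-old-backups",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}

# apply it to the bucket
aws s3api put-bucket-lifecycle-configuration --bucket bucketname --lifecycle-configuration file://lifecycle.json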

mastef commented Jan 23, 2014

Was getting errors on a few selected databases over 5 MB.
Setting host_bucket = %(bucket)s.s3-external-3.amazonaws.com helped, as described here:
http://serverfault.com/questions/330453/s3cmd-fails-too-many-times/481045#481045
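
For reference, that setting lives in ~/.s3cfg (the default config file written by s3cmd --configure):

host_bucket = %(bucket)s.s3-external-3.amazonaws.com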

rujmah commented Feb 21, 2014

Awesome. On the button. Just what I needed. Thanks, @oodavid

This is so great. Saves me hours compared to having to fight with WordPress backup plugins

dud3 commented Jun 28, 2014

@oodavid and @mastef thanks a lot.

sam2de commented Jul 11, 2014

Perfect tool, Thank you!

ghost commented Aug 19, 2014

Am I forced to create an IAM user and a corresponding key to use this script?

mikhail commented Oct 3, 2014

Modified to create file per table and ask for password as stdin: https://gist.github.com/siminm/93f72d79323d2381d623

How can I back it up if my database is on an external server, e.g. Google Cloud SQL?

Is there a way to update this script to stream the backup to S3, for the use-case where the DB backup is too large to fit on to the storage medium the script is being run on?
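
One possible approach (an untested sketch; it swaps s3cmd for the AWS CLI, whose s3 cp command can upload from stdin):

# Stream the dump straight to S3 without a temp file; "-" reads from stdin
mysqldump -u root -p$mysqlpass --force --opt --databases "$db" | gzip -c | aws s3 cp - "$bucket/$stamp/$db.sql.gz"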

Thanks man. Neat!

Excellent. Thank you.

I've implemented a tool for this scenario in this Docker image: https://github.com/schickling/dockerfiles/tree/master/mysql-backup-s3

You can use it like this:

$ docker run -e S3_ACCESS_KEY_ID=key -e S3_SECRET_ACCESS_KEY=secret -e S3_BUCKET=my-bucket -e S3_PREFIX=backup -e MYSQL_USER=user -e MYSQL_PASSWORD=password -e MYSQL_HOST=localhost schickling/mysql-backup-s3

How do I use this if I want to back up only a single database?

Thanks man ...

If you want to use it with the aws CLI, simply replace this line:

s3cmd put "$tmpfile" "$object"

with this:

aws s3 cp "$tmpfile" "$object"

Thank you for the script

What if you have multiple DBs on the server?

urshofer commented Oct 2, 2015

You'll probably need to add a PATH in the script; otherwise some binaries may not be found when run via cron:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
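
For example, near the top of the script itself, or as the first line of the crontab (shown here with the cron entry from step 3):

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 3 * * * bash /var/www/s3mysqlbackup.sh >/dev/null 2>&1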

Thanks for this awesome tip and script. I can successfully run the script manually on our Ubuntu server. However, when it runs as a cron job it does not work. Here is what happens when it runs as a cron job.

Output from command /bin/sh /etc/mysql/aws/s3mysqlbackuo.sh ..

[ASCII-art banner printed with raw colour-escape codes omitted]

Dumping to s3://********/sql_backup/1444144678 - Tuesday 06 October 2015 @ 0817/
c1uhstem
creating /tmp/1444144678 - Tuesday 06 October 2015 @ 0817 - c1uhstem.sql.gz
uploading...
ERROR: /root/.s3cfg: No such file or directory
ERROR: Configuration file not available.
ERROR: Consider using --configure parameter to create one.

And here is our cronjob entry:

0 5-20 * * 1-5 /bin/sh /etc/mysql/aws/s3mysqlbackuo.sh

Any help on this will be greatly appreciated.

lamasbr commented Dec 1, 2015

thank you! worked like a charm.

If anyone is having trouble getting the awesome script to work in cron, here is my change to line 44 of the s3mysqlbackup.sh file.

I had to declare the absolute path to make it work with cron:

/usr/local/bin/s3cmd --config=/home/ubuntu/.s3cfg put "$tmpfile" "$object"

Note: I used the Python install instructions found in the INSTALL file

Seriously, thanks. AWS does not support reducing an RDS instance's allocated storage after you've created it. This allowed me to migrate all my data and import it into a new RDS instance with less allocated storage. Thanks again.

Awesome script! Thanks a ton.

Perfect. Thank you!

Karl456 commented Feb 15, 2016

Is it possible to run this on a local server to pull from a remote MySQL server?

UPDATE:

Change this line:

databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`

to:

databases=`mysql -h 127.0.0.1 -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`

Change 127.0.0.1 to whatever IP address you want in order to back up remotely. Add the same -h 127.0.0.1 to the mysqldump command.
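
So the dump line becomes:

mysqldump -h 127.0.0.1 -u root -p$mysqlpass --force --opt --databases "$db" | gzip -c > "$tmpfile"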

jalama commented Apr 6, 2016

Modified to include the concept of a host (i.e. MySQL on another server): https://gist.github.com/jalama/b237e020884aa5c874e284ccf0f52095

Isn't working for me:

line 20: syntax error near unexpected token `do'
line 20: `for db in $databases; do'

What am I doing wrong?

MrOlek commented Jun 16, 2016

The easiest way to backup your MySQL database to Amazon S3 automatically is to use this tool MySQLBackupFTP (http://mysqlbackupftp.com/).

mbrooks commented Jun 20, 2016

It really annoys me that this isn't in an actual git project. Mainly because it doesn't really have a workflow for reporting and fixing issues.

overint commented Jul 8, 2016

Works well, thanks!

gnr5 commented Aug 2, 2016

Just tried it and it works. However, what if I want to keep only the latest day in S3 instead of a file for each day? Any solution for this?

MrRaph commented Aug 5, 2016

Thanks a lot! :-)

@gnr5: You can create a deletion rule in your bucket's properties. You'll be able to delete files some days after they were uploaded ;)

This works perfectly.

However, if you need to back up multiple servers, you can consider using our newly launched tool: https://www.backupbird.com/

hlebarovcom commented Nov 5, 2016

Great script! It inspired me to create a simple script to back up website files to Amazon S3. I've uploaded it here for everyone to use - https://github.com/hlebarov-com/s3websitesbackup

Cheers!

Awesome script! Works amazingly well.

craigderington commented Feb 15, 2017

Great job. This script rocks! Edited 5-11-2017: OK, I finally resolved the "Connection Reset by Peer" error by changing the s3cmd command from put to sync (~ line 44). Now it works flawlessly. Thanks very much!

antisocial89 commented Mar 14, 2017

The grep was returning nothing for me... on the bright side, I need to back up ALL of my databases, so I changed:

databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`

to

databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| "`

And everything is peachy.

amrutaDesai commented Apr 12, 2017

Very nice script.
I had a problem running the script through crontab: editing the root-level crontab was denied (the script runs inside VirtualBox), and without -u in the command below nothing happened.
You need to edit the crontab for the specific user:

sudo crontab -u specificUser -e

After editing the crontab for the user the script should run as, it worked for me.

Great script! Exactly what I needed. Thank you very much!

cdtweb commented May 13, 2017

This idea will save me a bunch of money every month vs. using AWS RDS for my MySQL server.

Awesome!!!

Saver! Thank you! 👍

Thank you. Keep up the good work!!

buluxan commented Oct 31, 2017

I got this error when I tried to run the command:
A client error (RequestTimeTooSkewed) occurred when calling the ListBuckets operation: The difference between the request time and the current time is too large
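
RequestTimeTooSkewed usually means the server's clock has drifted too far from the actual time; re-syncing the clock normally clears it (one way, assuming ntpdate is installed):

sudo ntpdate pool.ntp.org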
