Backup MySQL to Amazon S3

This is a simple way to back up your MySQL databases to Amazon S3 as a nightly backup - this is all to be done on your server :-)

Sister Document - Restore MySQL from Amazon S3 - read that next

1 - Install s3cmd

This is for CentOS 5.6; see http://s3tools.org/repositories for other systems such as Ubuntu.

# Install s3cmd
cd /etc/yum.repos.d/
wget http://s3tools.org/repo/CentOS_5/s3tools.repo
yum install s3cmd
# Setup s3cmd
s3cmd --configure
    # You’ll need to enter your AWS access key and secret key here; everything else is optional and can be left at its defaults :-)
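
For reference, the configure step writes your settings to ~/.s3cfg; a minimal excerpt looks roughly like this (keys are placeholders, and the real file contains many more options):

# ~/.s3cfg - illustrative excerpt
[default]
access_key = YOUR_AWS_ACCESS_KEY
secret_key = YOUR_AWS_SECRET_KEY
use_https = True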

2 - Add your script

Upload a copy of s3mysqlbackup.sh (it will need some tweaks for your setup), make it executable, and test it:

# Add the executable bit
chmod +x s3mysqlbackup.sh
# Run the script to make sure it's all tickety boo
./s3mysqlbackup.sh

3 - Run it every night with CRON

Assuming the backup script is stored in /var/www/s3mysqlbackup.sh, we need to add a cron job to run it automatically:

# Edit the crontab
env EDITOR=nano crontab -e
    # Add the following lines:
    # Run the database backup script at 3am
    0 3 * * * bash /var/www/s3mysqlbackup.sh >/dev/null 2>&1
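
Note that cron runs with a minimal environment, so if mysql, mysqldump or s3cmd live outside cron's default PATH the job can fail silently (several comments below hit this). A more defensive crontab entry - paths are typical, adjust to your system - might be:

# Ensure cron can find mysql, mysqldump and s3cmd
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Run the database backup script at 3am, logging output while you verify it works
0 3 * * * bash /var/www/s3mysqlbackup.sh >> /var/log/s3mysqlbackup.log 2>&1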

4 - Don't expose the script!

If for some reason you put this script in a public folder (not sure why you would do this), you should add the following to your .htaccess or httpd.conf file to prevent public access to the files:

### Deny public access to shell files
<Files *.sh>
    Order allow,deny
    Deny from all
</Files>
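
If you serve the folder with nginx instead of Apache, an equivalent deny rule might look like this (a sketch, to be placed in the relevant server block):

# Deny public access to shell files (nginx equivalent)
location ~ \.sh$ {
    deny all;
}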
s3mysqlbackup.sh

#!/bin/bash

# Based on https://gist.github.com/2206527

# Be pretty
echo -e " "
echo -e "   .  ____  .    ______________________________"
echo -e "   |/      \|   |                              |"
echo -e "  [| \e[1;31m♥    ♥\e[00m |]  | S3 MySQL Backup Script v.0.1 |"
echo -e "   |___==___|  /  © oodavid 2012              |"
echo -e "               |______________________________|"
echo -e " "

# Basic variables
mysqlpass="ROOTPASSWORD"
bucket="s3://bucketname"

# Timestamp (sortable AND readable)
stamp=`date +"%s - %A %d %B %Y @ %H%M"`

# List all the databases
databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`

# Feedback
echo -e "Dumping to \e[1;32m$bucket/$stamp/\e[00m"

# Loop the databases
for db in $databases; do

    # Define our filenames
    filename="$stamp - $db.sql.gz"
    tmpfile="/tmp/$filename"
    object="$bucket/$stamp/$filename"

    # Feedback
    echo -e "\e[1;34m$db\e[00m"

    # Dump and zip
    echo -e "  creating \e[0;35m$tmpfile\e[00m"
    mysqldump -u root -p$mysqlpass --force --opt --databases "$db" | gzip -c > "$tmpfile"

    # Upload
    echo -e "  uploading..."
    s3cmd put "$tmpfile" "$object"

    # Delete
    rm -f "$tmpfile"

done

# Jobs a goodun
echo -e "\e[1;32mJobs a goodun\e[00m"
Comments

@yogibear54 commented Mar 22, 2013

Thanks! This is exactly the tool I was looking for!

@embedded1 commented Mar 23, 2013

Thanks for this wonderful script - worked like a charm.

I was thinking of a new feature: cyclic backups, where the user configures a maximum number of backups to keep.

@shinronin commented Apr 3, 2013

@embedded1 here's what I do to keep around only the backups I care about.

Keep 2 Sunday backups:

if [ `date +%A` == 'Sunday' ]; then
    rm `ls -t $store_dir/$db-Sunday*gz | awk 'NR > 2'`

otherwise keep 5 backups:

else
    rm `ls -t $store_dir/$db*gz | grep -v 'Sunday' | awk 'NR > 5'`
fi

echo "sync'ing $store_dir/ to $bucket_dir/..."

Trailing slashes required:

s3cmd sync --delete-removed "$store_dir/" "$bucket_dir/"
echo "done"

@RayParkerBassPlayer commented May 26, 2013

You can just set up your S3 bucket to delete things after a certain time period, no?

Aside -- great, and thanks for posting this stuff! I was really dreading the thought of DIY.
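
For anyone taking the lifecycle route, a sketch of such a rule using the AWS CLI (bucket name and retention period are illustrative):

# lifecycle.json - expire objects after 30 days
{
  "Rules": [
    {
      "ID": "expire-old-backups",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}

# Apply it to the bucket
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json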

@mastef commented Jan 23, 2014

I was getting errors on a few databases over 5 MB.
Setting host_bucket = %(bucket)s.s3-external-3.amazonaws.com in .s3cfg helped, as described here:
http://serverfault.com/questions/330453/s3cmd-fails-too-many-times/481045#481045

@rujmah commented Feb 21, 2014

Awesome. On the button. Just what I needed. Thanks, @oodavid

@jon-heller commented May 7, 2014

This is so great. Saves me hours compared to having to fight with WordPress backup plugins

@dud3 commented Jun 28, 2014

@oodavid and @mastef thanks a lot.

@sam2de commented Jul 11, 2014

Perfect tool, Thank you!

@ghost commented Aug 19, 2014

Am I forced to create an IAM user and a corresponding key to use this script?
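
Not strictly - any credentials s3cmd accepts will do - but since the script only needs write access to the backup bucket, a dedicated IAM user with a narrow policy is a sensible choice. A minimal policy sketch, with an illustrative bucket name (s3cmd may additionally need s3:ListBucket on the bucket itself):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::bucketname/*"
    }
  ]
}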

@mikhail commented Oct 3, 2014

Modified to create file per table and ask for password as stdin: https://gist.github.com/siminm/93f72d79323d2381d623

@alfonmga commented Nov 3, 2014

How can I back it up if my database is on an external server, e.g. Google Cloud SQL?

@kylegibson commented Dec 11, 2014

Is there a way to update this script to stream the backup to S3, for the use-case where the DB backup is too large to fit on to the storage medium the script is being run on?
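
One approach, assuming the AWS CLI is installed and configured (it accepts - as a source to read from stdin; a later comment by @parmarthgit takes the same route), is to pipe the dump straight to S3 with no temp file:

# Stream the dump to S3 without writing a local file
mysqldump -u root -p"$mysqlpass" --force --opt --databases "$db" | gzip -c | aws s3 cp - "$bucket/$stamp/$db.sql.gz"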

@unnitallman commented Mar 12, 2015

Thanks man. Neat !

@sidis405 commented Apr 16, 2015

Excellent. Thank you.

@schickling commented Apr 24, 2015

I've implemented a tool for this scenario in this Docker image: https://github.com/schickling/dockerfiles/tree/master/mysql-backup-s3

You can use it like this:

$ docker run -e S3_ACCESS_KEY_ID=key -e S3_SECRET_ACCESS_KEY=secret -e S3_BUCKET=my-bucket -e S3_PREFIX=backup -e MYSQL_USER=user -e MYSQL_PASSWORD=password -e MYSQL_HOST=localhost schickling/mysql-backup-s3

@mayankiiitm commented Jul 15, 2015

How do I use this if I want to back up only a single database?
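
One simple tweak (a sketch): replace the databases= discovery line in the script with a hard-coded name, and the loop then processes just that one database:

# Back up a single database instead of discovering them all
databases="mydatabase"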

@rajvasani commented Jul 21, 2015

Thanks man...

@Corepany commented Jul 24, 2015

If you want to use it with the AWS CLI, simply change this line:

s3cmd put "$tmpfile" "$object"

with this:

aws s3 cp "$tmpfile" "$object"

Thank you for the script

@iamkingsleyf commented Sep 6, 2015

What if you have multiple DBs on the server?

@urshofer commented Oct 2, 2015

Probably you'll need to add a path in the script, otherwise some binaries may not be found if run via cron:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

@NextCodeLLC commented Oct 6, 2015

Thanks for this awesome tip and script. I can successfully run the script manually on our Ubuntu Server. However when we have it run as a cron job it does not work. Here is what it does when run as a cron job.

Output from command /bin/sh /etc/mysql/aws/s3mysqlbackuo.sh ..

    [ASCII-art banner, rendered with raw escape codes under cron]

    Dumping to s3://********/sql_backup/1444144678 - Tuesday 06 October 2015 @ 0817/
    c1uhstem
    creating /tmp/1444144678 - Tuesday 06 October 2015 @ 0817 - c1uhstem.sql.gz
    uploading...
    ERROR: /root/.s3cfg: No such file or directory
    ERROR: Configuration file not available.
    ERROR: Consider using --configure parameter to create one.

And here is our cronjob entry:

0 5-20 * * 1-5 /bin/sh /etc/mysql/aws/s3mysqlbackuo.sh

Any help on this will be greatly appreciated.

@lamasbr commented Dec 1, 2015

thank you! worked like a charm.

@andrewvttran commented Dec 14, 2015

If anyone is having trouble getting the awesome script to work in cron, here is my change to line 44 of the s3mysqlbackup.sh file.

I had to declare the absolute path to make it work with cron:

/usr/local/bin/s3cmd --config=/home/ubuntu/.s3cfg put "$tmpfile" "$object"

Note: I used the Python install instructions found in the INSTALL file

@jay-johnson commented Jan 1, 2016

Seriously, thanks. AWS does not support reducing an RDS instance's allocated storage after you've created it. This allowed me to migrate all my data and import it into a new RDS instance with a smaller allocated storage size. Thanks again.

@carsoncole commented Jan 24, 2016

Awesome script! Thanks a ton.

@SteveConnelly commented Feb 2, 2016

Perfect. Thank you!

@Karl456 commented Feb 15, 2016

Is it possible to run this on a local server to pull from a remote MySQL server?

UPDATE:

Change this line:

databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`

to:

databases=`mysql -h 127.0.0.1 -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`

Change 127.0.0.1 to whatever IP address you want in order to back up remotely. Add the same -h 127.0.0.1 to the mysqldump command.

@jalama commented Apr 6, 2016

Modified to include the concept of a host (i.e. MySQL on another server): https://gist.github.com/jalama/b237e020884aa5c874e284ccf0f52095

@marcolang commented Apr 15, 2016

Isn't working for me:

line 20: syntax error near unexpected token `do'
line 20: `for db in $databases; do'

What am I doing wrong?

@MrOlek commented Jun 16, 2016

The easiest way to backup your MySQL database to Amazon S3 automatically is to use this tool MySQLBackupFTP (http://mysqlbackupftp.com/).

@mbrooks commented Jun 20, 2016

It really annoys me that this isn't in an actual git project, mainly because there isn't really a workflow for reporting and fixing issues.

@overint commented Jul 8, 2016

Works well, thanks!

@gnr5 commented Aug 2, 2016

Just tried it and it works. However what if I want to keep only the latest day in S3, instead of a file for each day? Any solution for this?

@MrRaph commented Aug 5, 2016

Thanks a lot ! :-)

@gnr5 : You can create a deletion rule in your bucket's properties, so that files are deleted automatically some days after they were uploaded ;)

@koldstar commented Aug 13, 2016

This works perfectly.

However, if you need to backup multiple servers, you can consider using our newly launched tool: https://www.backupbird.com/

@hlebarovcom commented Nov 5, 2016

Great script! It inspired me to create a simple script to back up website files to Amazon S3. I've uploaded it here for everyone to use - https://github.com/hlebarov-com/s3websitesbackup

Cheers!

@prasoon-sipl commented Jan 27, 2017

This path is no longer valid - wget http://s3tools.org/repo/CentOS_5/s3tools.repo
Update this to - wget http://s3tools.org/repo/RHEL_5/s3tools.repo

@ghost commented Feb 2, 2017

Awesome script! Works amazingly well.

@craigderington commented Feb 15, 2017

Great job. This script rocks! Edited: 5-11-2017: OK, I finally resolved "Connection Reset by Peer" error by changing the S3CMD command from PUT to SYNC. (~ line 44) Now, it works flawlessly. Thanks very much!

@antisocial89 commented Mar 14, 2017

The grep was returning nothing for me... on the bright side, I need to back up ALL of my databases, so I changed:

databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`

to

databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| "`

And everything is peachy.

@amrutaDesai commented Apr 12, 2017

Very nice script.
I had a problem running the script through crontab: editing the root crontab was denied (it runs on a VirtualBox VM), and without -u in the command below nothing happened, so the crontab has to be edited for the specific user:

sudo crontab -u specificUser -e

Having edited the crontab for that user, it worked for me.

@reinvanhaaren commented Apr 14, 2017

Great script! Exactly what I needed. Thank you very much!

@cdtweb commented May 13, 2017

This idea will save me a bunch of money every month vs. using AWS RDS for my MySQL server.

@arpit1997 commented Jun 21, 2017

Awesome !!!

@thecatontheflat commented Jul 13, 2017

Saver! Thank you! 👍

@karthik-ey commented Jul 22, 2017

Thank you. Keep up the good work!!

@buluxan commented Oct 31, 2017

I got this error when I tried to run the command:

A client error (RequestTimeTooSkewed) occurred when calling the ListBuckets operation: The difference between the request time and the current time is too large

@thapakazi commented Nov 19, 2017

@buluxan check your system time; it seems to have drifted from the time S3 uses during that API call.
You can use ntpd or systemd's timesyncd to keep your clock in sync with NTP servers.
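
On systemd-based systems, a quick way to check and enable synchronisation (assuming systemd-timesyncd is available):

# Check clock and synchronisation status
timedatectl status
# Enable NTP synchronisation
sudo timedatectl set-ntp true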

@fatelgit commented Nov 20, 2017

Just wanted to add a quick note: In case your password contains any special characters, you have to use single quotes - otherwise you will get an error.

@aomega08 commented Mar 26, 2018

The script looks great. Beyond that, it's worth pointing out that you may want to properly protect the target S3 bucket, e.g.: https://francescoboffa.com/using-s3-to-store-your-mysql-or-postgresql-backups

@simbo1905 commented May 27, 2018

You probably want to encrypt the file that you are uploading into S3. Here is a version of the script that uses gpg for symmetric encryption of the file before it is uploaded:

https://github.com/simbo1905/s3-mysql-backup/blob/e13145720d98e2456f5393c6e5fab0418e77bb89/s3mysqlbackup.sh#L46

@parmarthgit commented Aug 31, 2018

Nice one, but

mysqldump --host=$HOST --user=$USER --password=$PASSWORD $DB_NAME --routines --single-transaction | gzip -9 | aws s3 cp - s3://bucket/database/filename.sql.gz

will stream the dump directly to S3, with no local file.

@tobidsn commented Sep 24, 2018

Backup MySQL to Amazon S3 Specific Table : https://gist.github.com/tobidsn/1abe9e75095a67f194c1cf6c00aac141

@NitsPatel1 commented Dec 11, 2018

I have used the above script, but I don't get an automatic backup at 3 a.m.; when I run the script manually, the backup works. Also, how can I keep only the last 30 days of backups? Backups older than 30 days should be deleted automatically by this script.
