
Backup MySQL to Amazon S3

This is a simple way to back up your MySQL databases to Amazon S3 for a nightly backup - this is all to be done on your server :-)

Sister Document - Restore MySQL from Amazon S3 - read that next

1 - Install s3cmd

This is for CentOS 5.6; the steps for other systems such as Ubuntu will differ slightly.

# Install s3cmd
cd /etc/yum.repos.d/
yum install s3cmd
# Setup s3cmd
s3cmd --configure
    # You'll need to enter your AWS access key and secret key here; everything else is optional and can be left at the defaults :-)

2 - Add your script

Upload a copy of the script (it will need some tweaks for your setup), make it executable, and test it:

# Add the executable bit
chmod +x
# Run the script to make sure it's all tickety boo

3 - Run it every night with CRON

Assuming the backup script is stored in /var/www/, we need to add a cron task to run it automatically:

# Edit the crontab
env EDITOR=nano crontab -e
    # Add the following lines:
    # Run the database backup script at 3am
    0 3 * * * bash /var/www/ >/dev/null 2>&1

4 - Don't expose the script!

If for some reason you put this script in a public folder (not sure why you would do this), you should add the following to your .htaccess or httpd.conf file to prevent public access to the files:

### Deny public access to shell files
<Files *.sh>
    Order allow,deny
    Deny from all
</Files>
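Note that Order/Deny are Apache 2.2 directives; on Apache 2.4 and later the equivalent block uses Require instead:

```apache
### Deny public access to shell files (Apache 2.4+ syntax)
<Files "*.sh">
    Require all denied
</Files>
```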
The backup script itself - tweak the variables for your setup:

# Based on
# Be pretty
echo -e " "
echo -e "  .  ____  .   ______________________________"
echo -e " |/      \|   |                              |"
echo -e "[| \e[1;31m♥    ♥\e[00m |]  | S3 MySQL Backup Script v.0.1 |"
echo -e " |___==___|  /  © oodavid 2012               |"
echo -e "              |______________________________|"
echo -e " "
# Basic variables - set these for your server
mysqlpass="PASSWORD"
bucket="s3://your-bucket-name"
# Timestamp (sortable AND readable)
stamp=`date +"%s - %A %d %B %Y @ %H%M"`
# List all the databases, skipping the system ones
databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`
# Feedback
echo -e "Dumping to \e[1;32m$bucket/$stamp/\e[00m"
# Loop the databases
for db in $databases; do
    # Define our filenames
    filename="$stamp - $db.sql.gz"
    tmpfile="/tmp/$filename"
    object="$bucket/$stamp/$filename"
    # Feedback
    echo -e "\e[1;34m$db\e[00m"
    # Dump and zip
    echo -e "  creating \e[0;35m$tmpfile\e[00m"
    mysqldump -u root -p$mysqlpass --force --opt --databases "$db" | gzip -c > "$tmpfile"
    # Upload
    echo -e "  uploading..."
    s3cmd put "$tmpfile" "$object"
    # Delete the local copy
    rm -f "$tmpfile"
done
# Jobs a goodun
echo -e "\e[1;32mJobs a goodun\e[00m"

Thanks! This is the perfect tool I was looking for!

Thanks for this wonderful script.
Worked like a charm.

I was thinking of a new feature: cyclic backups, where for example the user configures a maximum number of backups to keep.

@embedded1 here's what I do to keep around only the backups I care about.

# keep 2 Sunday backups
if [ `date +%A` == 'Sunday' ]; then
    rm `ls -t $store_dir/$db-Sunday*gz | awk 'NR > 2'`
else
    # otherwise keep 5 backups
    rm `ls -t $store_dir/$db*gz | grep -v 'Sunday' | awk 'NR > 5'`
fi

# sync to S3 - trailing slashes required
echo "sync'ing $store_dir/ to $bucket_dir/..."
s3cmd sync --delete-removed "$store_dir/" "$bucket_dir/"
echo "done"

You can just set up your S3 bucket to delete things after a certain time period, no?
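Indeed - an S3 lifecycle rule can expire old dumps automatically. A minimal sketch with the AWS CLI; the bucket name, prefix, and retention period below are placeholders, not part of the original script:

```shell
# Write a lifecycle rule that expires backups after 30 days.
# "my-backup-bucket" and "sql_backup/" are placeholder names.
cat > /tmp/lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "expire-old-backups",
    "Filter": { "Prefix": "sql_backup/" },
    "Status": "Enabled",
    "Expiration": { "Days": 30 }
  }]
}
EOF
# Apply it (needs AWS CLI credentials, so commented out here):
# aws s3api put-bucket-lifecycle-configuration \
#     --bucket my-backup-bucket \
#     --lifecycle-configuration file:///tmp/lifecycle.json
```

The same rule can be created by hand under the bucket's lifecycle properties in the S3 console.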

Aside -- great, and thanks for posting this stuff! I was really dreading the thought of DIY.

mastef commented Jan 23, 2014

Was getting errors on a few databases over 5 MB.
Setting the host_bucket to host_bucket = %(bucket) helped, as described here:

rujmah commented Feb 21, 2014

Awesome. On the button. Just what I needed. Thanks, @oodavid

This is so great. Saves me hours compared to having to fight with WordPress backup plugins

dud3 commented Jun 28, 2014

@oodavid and @mastef thanks a lot.

sam2de commented Jul 11, 2014

Perfect tool, Thank you!


ghost commented Aug 19, 2014

am i forced to create IAM user and relative key to use this script?

mikhail commented Oct 3, 2014

Modified to create file per table and ask for password as stdin:

How can I back it up if my database is on an external server, e.g. Google Cloud SQL?

Is there a way to update this script to stream the backup to S3, for the use-case where the DB backup is too large to fit on to the storage medium the script is being run on?
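Streaming is possible with the AWS CLI, since `aws s3 cp` accepts `-` as its source and uploads the object body from stdin. A hedged sketch, wrapped as a function so it could be called per-database; the variable names here are illustrative, not from the original script:

```shell
# Stream a gzipped dump straight to S3 without a local temp file.
stream_backup() {
    local db="$1" bucket="$2" mysqlpass="$3"
    local stamp
    stamp=$(date +%F)
    # gzip output is piped into `aws s3 cp -`, which uploads from stdin,
    # so the dump never has to fit on the local disk
    mysqldump -u root -p"$mysqlpass" --opt --databases "$db" \
        | gzip -c \
        | aws s3 cp - "s3://$bucket/$stamp - $db.sql.gz"
}
```

One caveat: because the object size is not known up front, very large streamed uploads depend on the CLI's multipart chunking, so you may need to raise `multipart_chunksize` in the AWS CLI config for dumps in the tens of gigabytes.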

Thanks man. Neat !

Excellent. Thank you.

I've implemented a tool for this scenario in this Docker image:

You can use it like this:

$ docker run -e S3_ACCESS_KEY_ID=key -e S3_SECRET_ACCESS_KEY=secret -e S3_BUCKET=my-bucket -e S3_PREFIX=backup -e MYSQL_USER=user -e MYSQL_PASSWORD=password -e MYSQL_HOST=localhost schickling/mysql-backup-s3

How do I use this if I want to back up only a single database?

Thanks man ...

If you want to use it with the AWS CLI, simply change this line:
s3cmd put "$tmpfile" "$object"
to this:
aws s3 cp "$tmpfile" "$object"

Thank you for the script


What if you have multiple DBs on the server?

urshofer commented Oct 2, 2015

Probably you'll need to add a path in the script, otherwise some binaries may not be found when run via cron:
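For example, cron jobs run with a minimal environment, so declaring a PATH near the top of the script is a common fix (the directories below are typical defaults, not taken from the original):

```shell
# Cron supplies a bare-bones environment; declare PATH explicitly so
# binaries such as s3cmd (often installed under /usr/local/bin) are found.
PATH=/usr/local/bin:/usr/bin:/bin
export PATH
```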


Thanks for this awesome tip and script. I can successfully run the script manually on our Ubuntu server. However, when it runs as a cron job it does not work. Here is the output when run as a cron job.

Output from command /bin/sh /etc/mysql/aws/ ..

. ____ . ______________________________
|/ | | |
[| �[1;31m♥ ♥�[00m |] | S3 MySQL Backup Script v.0.1 |
|_==| / © oodavid 2012 |

Dumping to �[1;32ms3://********/sql_backup/1444144678 - Tuesday 06 October 2015 @ 0817/�[00m
creating �[0;35m/tmp/1444144678 - Tuesday 06 October 2015 @ 0817 - c1uhstem.sql.gz�[00m
ERROR: /root/.s3cfg: No such file or directory
ERROR: Configuration file not available.
ERROR: Consider using --configure parameter to create one.

And here is our cronjob entry:

0 5-20 * * 1-5 /bin/sh /etc/mysql/aws/

Any help on this will be greatly appreciated.

lamasbr commented Dec 1, 2015

thank you! worked like a charm.

If anyone is having trouble getting the awesome script to work in cron, here is my change to line 44 of the file.

I had to declare the absolute path to make it work with cron:

/usr/local/bin/s3cmd --config=/home/ubuntu/.s3cfg put "$tmpfile" "$object"

Note: I used the Python install instructions found in the INSTALL file

Seriously, thanks. AWS does not support reducing an RDS instance's allocated storage after you create the instance. This allowed me to migrate all my data and import it into a new RDS instance with a smaller allocated storage size. Thanks again.

Awesome script! Thanks a ton.

Perfect. Thank you!

Karl456 commented Feb 15, 2016

Is it possible to run this on a local server to pull from a remote MySQL server?


Change this line:

databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`

to this:

databases=`mysql -h <ip-address> -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`

Change <ip-address> to whatever IP address you want in order to back up remotely. Add the same -h to the mysqldump command.

jalama commented Apr 6, 2016

Modified to include the concept of a host (i.e. MySQL on another server):

Isn't working for me:

line 20: syntax error near unexpected token `do'
line 20: `for db in $databases; do'

What am I doing wrong?

MrOlek commented Jun 16, 2016

The easiest way to backup your MySQL database to Amazon S3 automatically is to use this tool MySQLBackupFTP (

mbrooks commented Jun 20, 2016

It really annoys me that this isn't in an actual git project, mainly because it doesn't really have a workflow for reporting and fixing issues.

overint commented Jul 8, 2016

Works well, thanks!

gnr5 commented Aug 2, 2016

Just tried it and it works. However what if I want to keep only the latest day in S3, instead of a file for each day? Any solution for this?

MrRaph commented Aug 5, 2016

Thanks a lot ! :-)

@gnr5: You can create a deletion rule in your bucket's properties. You'll be able to delete files some days after they were uploaded ;)

This works perfectly.

However, if you need to backup multiple servers, you can consider using our newly launched tool:

hlebarovcom commented Nov 5, 2016

Great script! I was inspired by it to create a simple script to back up website files to Amazon S3. I've uploaded it here for everyone to use -


Awesome script! works amazingly well.

craigderington commented Feb 15, 2017

Great job. This script rocks! Edited: 5-11-2017: OK, I finally resolved "Connection Reset by Peer" error by changing the S3CMD command from PUT to SYNC. (~ line 44) Now, it works flawlessly. Thanks very much!

antisocial89 commented Mar 14, 2017

The grep was returning nothing for me... on the bright side I need to back up ALL of my databases, so I changed:

databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`

to:

databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| "`

And everything is peachy.

amrutaDesai commented Apr 12, 2017

Very nice script.
I had a problem running the script through crontab: editing the root-level crontab was denied (it runs in a VirtualBox VM), and without -u in the command below nothing happens.
So you need to edit the crontab for the specific user:
sudo crontab -u specificUser -e

Hence, I edited the crontab for the specific user the script should run as, and it worked for me.

Great script! Exactly what i needed. Thank you very much!

cdtweb commented May 13, 2017

This idea will save me a bunch of money every month vs. using AWS RDS for my MySQL server.

Awesome !!!

Saver! Thank you! 👍

Thank you. Keep up the good work!!

buluxan commented Oct 31, 2017

I got this error when I tried to run the command:
A client error (RequestTimeTooSkewed) occurred when calling the ListBuckets operation: The difference between the request time and the current time is too large
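RequestTimeTooSkewed usually means the server's clock has drifted out of sync; S3 rejects requests whose timestamp is too far from the current time. Resyncing the clock normally fixes it - a sketch, assuming an NTP client such as ntpdate or chrony is installed:

```shell
# Check the server's idea of UTC against the real time first:
date -u
# Then resync the clock - one of these, depending on what's installed:
# sudo ntpdate pool.ntp.org
# sudo chronyc makestep
```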
