The AWS CLI can be used to download recent database backups from S3.
First, you need to install it. On OS X you can run `brew install awscli`; otherwise, follow the [official documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-install-macos.html).
Confirm it's working by running `aws --version`.
Next, set up a new profile by running `aws configure --profile [PROFILE_NAME]` and enter the `AWS Access Key ID`, `AWS Secret Access Key`, `Default region name`, and `Default output format` (`text`).
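The command prompts for each value interactively; a typical session looks roughly like this (the key prefix, region, and trailing `...` are illustrative placeholders, not real credentials):

```
$ aws configure --profile [PROFILE_NAME]
AWS Access Key ID [None]: AKIA...
AWS Secret Access Key [None]: ...
Default region name [None]: us-east-1
Default output format [None]: text
```

The values are stored under `~/.aws/credentials` and `~/.aws/config`, so you only need to do this once per profile.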
To see a list of files for a particular date, run:

`aws --profile [PROFILE_NAME] s3 ls s3://[BUCKET_NAME]/[PATH_TO_YOUR_BACKUP]/`
To download a backup from S3, you can use the `cp` command:

`aws --profile [PROFILE_NAME] s3 cp s3://[BUCKET_NAME]/[PATH_TO_YOUR_BACKUP]/[FILENAME] ~/[YOUR_LOCAL_PATH]/[FILENAME]`
To import the downloaded backup (assuming it's a `.gz` archive), you can then run:

`zcat < ~/[YOUR_LOCAL_PATH]/[FILENAME] | mysql -h [MYSQL_IP] -u [USERNAME] -p[PASSWORD] [DATABASE]`
The missing space between `-p` and the password is intentional: with a space, `mysql` prompts for the password interactively and treats the next argument as a database name, so the command won't work as intended.
`localhost` is probably fine as the hostname, but in my case it's the IP of a local Docker MySQL container.
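If your MySQL server is in a Docker container on the default bridge network, you can look up its IP with `docker inspect` (the container name `mysql` here is an assumption; substitute your own):

```
docker inspect -f '{{.NetworkSettings.IPAddress}}' mysql
```

Alternatively, if the container was started with a published port (e.g. `-p 3306:3306`), you can skip the lookup and connect to `127.0.0.1` instead.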
You could then put this in your crontab to download and import a daily copy of your production database.
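As a sketch, a crontab entry along these lines would fetch and import the latest dump each morning (the 6 a.m. schedule, the `/home/[USER]/backups` path, and the placeholder names are assumptions; cron doesn't load your shell config, so use absolute paths and make sure the AWS profile's credentials are readable by the cron user):

```
# min hour dom mon dow  command
0 6 * * * aws --profile [PROFILE_NAME] s3 cp s3://[BUCKET_NAME]/[PATH_TO_YOUR_BACKUP]/[FILENAME] /home/[USER]/backups/[FILENAME] && zcat < /home/[USER]/backups/[FILENAME] | mysql -h [MYSQL_IP] -u [USERNAME] -p[PASSWORD] [DATABASE]
```

The `&&` ensures the import only runs if the download succeeded.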