@ulfmagnetics
Created March 16, 2012 00:30
Copy an EC2 EBS AMI from us-west-1 to us-east-1
#!/bin/bash
# Copied in its entirety from instructions by Eric Hammond (@esh) at http://alestic.com/2010/10/ec2-ami-copy
# Thanks for a very useful post!
# Some additional notes for myself:
# - Be sure to configure the target instance's firewall to allow port 22 access from the source_ip (or from 0.0.0.0/0)
# - Be sure that ssh-agent is running and that the identity file for the imported keypair has been added to it with ssh-add
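#   For example, a minimal sketch (assuming the same key used below, $HOME/.ssh/id_rsa):
#
#     eval "$(ssh-agent -s)"
#     ssh-add $HOME/.ssh/id_rsa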
# -----------
# Setup
#
# Define the region from which we are copying the EBS boot AMI (source) and the region
# to which we are copying (target). Define the EBS boot AMI that we are copying from
# the source region.
#
# We also need to determine which ids to use in the target region for the AKI (kernel image)
# and ARI (ramdisk image). These must correspond to the AKI and ARI in the source region or
# the new AMI may not work correctly. This is probably the trickiest step of the process and one
# which is not trivial to automate for the general case.
# ulfmagnetics note: newer AMIs use AKIs that do not require a ramdisk (see https://forums.aws.amazon.com/message.jspa?messageID=256534)
source_region=us-west-1
target_region=us-east-1
source_ami=ami-6d431b28 # Private AMI: Chef Bootstrap 1.04
target_aki=aki-825ea7eb
# target_ari=[ARI_ID]
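# If you need to look up a suitable pv-grub AKI in the target region, a sketch
# like the following should get you close (assumes the EC2 API tools; adjust
# the name filter for x86_64 images or hd00-style partitioned kernels):
#
#   ec2-describe-images --region $target_region -o amazon \
#     --filter "name=pv-grub-hd0*" --filter "architecture=i386"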
# To make things easier, we’ll upload our own ssh public key to both regions.
# We could also do this with ssh keys generated by EC2, but that is slightly
# more complex, as EC2 generates unique keys for each region.
ssh_key_file=$HOME/.ssh/id_rsa
tmp_keypair=copy-ami-keypair-$$
ec2-import-keypair --region $source_region --public-key-file $ssh_key_file.pub $tmp_keypair
ec2-import-keypair --region $target_region --public-key-file $ssh_key_file.pub $tmp_keypair
# Find the Ubuntu 10.04 LTS Lucid AMI in each of our regions of interest using the REST API provided by Ubuntu.
# Pick up some required information about the EBS boot AMI we are going to copy.
instance_type=t1.micro
source_run_ami=$(wget -q -O- http://uec-images.ubuntu.com/query/lucid/server/released.current.txt |
egrep "server.release.*ebs.i386.$source_region" | cut -f8)
target_run_ami=$(wget -q -O- http://uec-images.ubuntu.com/query/lucid/server/released.current.txt |
egrep "server.release.*ebs.i386.$target_region" | cut -f8)
architecture=$(ec2-describe-images --region $source_region $source_ami | egrep ^IMAGE | cut -f8)
ami_name=$(ec2-describe-images --region $source_region $source_ami | egrep ^IMAGE | cut -f3 | cut -f2 -d/)
source_snapshot=$(ec2-describe-images --region $source_region $source_ami | egrep ^BLOCKDEVICEMAPPING | cut -f4)
ami_size=$(ec2-describe-snapshots --region $source_region $source_snapshot | egrep ^SNAPSHOT | cut -f8)
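# Sanity-check the values harvested above before launching anything. This is a
# defensive sketch added on top of the original recipe, not part of it:
for var in source_run_ami target_run_ami architecture ami_name source_snapshot ami_size; do
  [ -n "${!var}" ] || { echo "ERROR: $var is empty; aborting" >&2; exit 1; }
done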
# -----------
# Image Copy
#
# Start an instance in each region. Have EC2 create a new volume from the AMI to copy and attach it to the source instance.
# Have EC2 create a new blank volume and attach it to the target instance.
dev=/dev/sdi
xvdev=/dev/sdi # On modern Ubuntu, you will need to use: xvdev=/dev/xvdi
mount=/image
source_instance=$(ec2-run-instances --region $source_region --instance-type $instance_type --key $tmp_keypair --block-device-mapping $dev=$source_snapshot::true $source_run_ami |
egrep ^INSTANCE | cut -f2)
target_instance=$(ec2-run-instances --region $target_region --instance-type $instance_type --key $tmp_keypair --block-device-mapping $dev=:$ami_size:true $target_run_ami |
egrep ^INSTANCE | cut -f2)
while ! ec2-describe-instances --region $source_region $source_instance | grep -q running; do sleep 1; done
while ! ec2-describe-instances --region $target_region $target_instance | grep -q running; do sleep 1; done
source_ip=$(ec2-describe-instances --region $source_region $source_instance | egrep "^INSTANCE" | cut -f17)
target_ip=$(ec2-describe-instances --region $target_region $target_instance | egrep "^INSTANCE" | cut -f17)
target_volume=$(ec2-describe-instances --region $target_region $target_instance | egrep "^BLOCKDEVICE.$dev" | cut -f3)
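# The instances report "running" before sshd is actually accepting connections,
# so wait until ssh succeeds on both hosts. This is a hedged convenience loop
# added here, not part of the original recipe:
for ip in $source_ip $target_ip; do
  while ! ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
      -i $ssh_key_file ubuntu@$ip true 2>/dev/null; do
    sleep 5
  done
done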
# Copy the file system from the EBS volume in the source region to the EBS volume in the target region.
# NOTE: The uec-rootfs file system label is required for Ubuntu 10.10 and above to boot correctly on EC2.
# It can be left off for other distributions and earlier versions of Ubuntu.
ssh -i $ssh_key_file ubuntu@$source_ip "sudo mkdir -m 000 $mount && sudo mount $xvdev $mount"
ssh -i $ssh_key_file ubuntu@$target_ip "sudo mkfs.ext3 -F -L uec-rootfs $xvdev && sudo mkdir -m 000 $mount && sudo mount $xvdev $mount"
# The following step may take a long time as the EBS volume is copied across regions...
# (This is also where I ran into issues with my identity file not being stored in ssh-agent)
ssh -A -i $ssh_key_file ubuntu@$source_ip "sudo -E rsync -PazSHAX --rsh='ssh -o \"StrictHostKeyChecking no\"' --rsync-path 'sudo rsync' $mount/ ubuntu@$target_ip:$mount/"
ssh -i $ssh_key_file ubuntu@$target_ip "sudo umount $mount"
# -----------
# AMI Creation
#
# Snapshot the target EBS volume and register it as a new AMI in the target region.
# If the source AMI included parameters like block device mappings for ephemeral storage,
# then add these options to the ec2-register command.
target_snapshot=$(ec2-create-snapshot --region $target_region $target_volume | egrep ^SNAPSHOT | cut -f2)
while ! ec2-describe-snapshots --region $target_region $target_snapshot | grep -q completed; do sleep 1; done
# ulfmagnetics note: as mentioned above, newer kernel images do not require ramdisks, so I'm omitting the "--ramdisk $target_ari" parameter here
target_ami=$(ec2-register --region $target_region --snapshot $target_snapshot --architecture $architecture --name "$ami_name" --kernel $target_aki |
cut -f2)
echo "Make a note of the new AMI id in $target_region: $target_ami"
# -----------
# Cleanup
#
# Terminate the EC2 instances that were used to copy the AMI. Since we let EC2
# create the EBS volumes on instance run, EC2 will automatically delete those
# volumes when the instances terminate. Finally, delete the temporary keypairs
# we imported to access the instances.
ec2-terminate-instances --region $source_region $source_instance
ec2-terminate-instances --region $target_region $target_instance
ec2-delete-keypair --region $source_region $tmp_keypair
ec2-delete-keypair --region $target_region $tmp_keypair