Created August 9, 2012 21:52
Replication of a file from the default US region into an Asia bucket.
#!/usr/bin/env ruby
require 'rubygems'
require 'fileutils'
require 'date'
require 'chronic'
require 'fog'

# Set the default directory for relative paths.
Dir.chdir(File.expand_path(File.dirname(__FILE__)))

# Import AWS credentials, i.e. @aws_access_key_id and @aws_secret_access_key
require './s3_credentials.rb'

## S3 Upload
# Create a connection to the default (US) region
connection = Fog::Storage.new({
  :provider              => 'AWS',
  :aws_access_key_id     => @aws_access_key_id,
  :aws_secret_access_key => @aws_secret_access_key
})

# ... and a second connection to the Asia Pacific (Singapore) region
asia_connection = Fog::Storage.new({
  :provider              => 'AWS',
  :aws_access_key_id     => @aws_access_key_id,
  :aws_secret_access_key => @aws_secret_access_key,
  :region                => 'ap-southeast-1'
})

# Copy the object into the Asia bucket, then look it up and print its public URL
new_file = connection.copy_object("csi-testbucket-eric", "calgary.pdf", "csi-testbucket-eric-asia", "calgary.pdf", {'x-amz-acl' => 'public-read'})
new_new_file = asia_connection.directories.get("csi-testbucket-eric-asia").files.get("calgary.pdf")
puts new_new_file.public_url

OUTPUT:
[WARNING] fog: followed redirect to csi-testbucket-eric-asia.s3-ap-southeast-1.amazonaws.com, connecting to the matching region will be more performant
This one is tricky for sure. The first optimization/simplification is that copy_object is run against the target (not the source). So you should actually be able to remove the first connection part altogether and use the asia_connection throughout. This should remove the warning and avoid the redirect which should make things run a bit faster.
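The first suggestion might be sketched as below. This is an assumption-laden sketch, not the commenter's actual code: it reuses the gist's bucket names, key, and s3_credentials.rb, and wraps the copy in a hypothetical replicate_to_asia method so nothing hits AWS until it is called with live credentials.

```ruby
# Sketch of the first suggestion: since copy_object runs against the
# TARGET bucket's connection, a single Asia-region connection suffices.
# Bucket/key names are taken from the gist above.
SOURCE_BUCKET = 'csi-testbucket-eric'
TARGET_BUCKET = 'csi-testbucket-eric-asia'
KEY           = 'calgary.pdf'

def replicate_to_asia
  require 'fog'
  require './s3_credentials.rb'  # defines @aws_access_key_id / @aws_secret_access_key

  asia_connection = Fog::Storage.new(
    :provider              => 'AWS',
    :aws_access_key_id     => @aws_access_key_id,
    :aws_secret_access_key => @aws_secret_access_key,
    :region                => 'ap-southeast-1'
  )

  # The source bucket is only named in the copy request, never connected to,
  # so the us-east-1 connection (and its redirect warning) can be dropped.
  asia_connection.copy_object(SOURCE_BUCKET, KEY, TARGET_BUCKET, KEY,
                              'x-amz-acl' => 'public-read')
end
```

Calling replicate_to_asia with live credentials should perform the copy without the redirect warning shown in the output above.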
You can optimize #31 by assuming that the bucket and file exist (get actually looks them up over the network, whereas new just assumes they are there) like so:
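The snippet the comment referred to was not captured in this copy, but a sketch of the get-vs-new distinction it describes might look like this (hypothetical helper name; bucket and key taken from the gist):

```ruby
# Sketch: directories.get / files.get issue requests to look the bucket
# and file up, while new just builds local models and assumes they exist.
# public_url is computed from the bucket and key alone, so the local
# models are enough and the lookup round trips are skipped.
def asia_public_url(asia_connection)
  directory = asia_connection.directories.new(:key => 'csi-testbucket-eric-asia')
  file      = directory.files.new(:key => 'calgary.pdf')
  file.public_url
end
```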
public_url should then still work as expected, but run faster.
Hope that helps explain; both are pretty fine points, but they should have some impact.