@chrismdp /s3.sh
Last active Nov 29, 2018

Uploading to S3 in 18 lines of Shell (used to upload builds for http://soltrader.net)
# You don't need Fog in Ruby or some other library to upload to S3 -- shell works perfectly fine
# This is how I upload my new Sol Trader builds (http://soltrader.net)
# Based on a modified script from here: http://tmont.com/blargh/2014/1/uploading-to-s3-in-bash
S3KEY="my aws key"
S3SECRET="my aws secret" # pass these in
function putS3
{
  path=$1
  file=$2
  aws_path=$3
  bucket='my-aws-bucket'
  date=$(date +"%a, %d %b %Y %T %z")
  acl="x-amz-acl:public-read"
  content_type='application/x-compressed-tar'
  string="PUT\n\n$content_type\n$date\n$acl\n/$bucket$aws_path$file"
  signature=$(echo -en "${string}" | openssl sha1 -hmac "${S3SECRET}" -binary | base64)
  curl -X PUT -T "$path/$file" \
    -H "Host: $bucket.s3.amazonaws.com" \
    -H "Date: $date" \
    -H "Content-Type: $content_type" \
    -H "$acl" \
    -H "Authorization: AWS ${S3KEY}:$signature" \
    "https://$bucket.s3.amazonaws.com$aws_path$file"
}

# $path must be set to the local directory containing the files to upload
for file in "$path"/*; do
  putS3 "$path" "${file##*/}" "/path/on/s3/to/files/"
done
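A quick way to sanity-check the signing step in isolation is to run the HMAC-SHA1 line on its own against fixed dummy values (the secret, date, bucket and file below are placeholders, and printf '%b' is used instead of echo -en for portability). A SHA-1 HMAC is 20 bytes, so the base64 signature should always come out as 28 characters ending in "=":

```shell
#!/bin/sh
# Compute a Signature V2 value for a fixed string-to-sign with a dummy secret.
# All values here are placeholders, not real credentials.
S3SECRET="dummy-secret"
string="PUT\n\napplication/x-compressed-tar\nWed, 28 Mar 2007 01:29:59 +0000\nx-amz-acl:public-read\n/my-aws-bucket/builds/game.tar.gz"
# printf '%b' expands the \n escapes portably (echo -en is bash-specific)
signature=$(printf '%b' "$string" | openssl sha1 -hmac "${S3SECRET}" -binary | base64)
echo "$signature"
```

If the signature length is wrong, the string-to-sign is almost certainly being mangled before it reaches openssl.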
@nodesocket

nodesocket commented May 1, 2015

ShellCheck suggests the following. 😄

[ShellCheck output screenshot]

Also, shameless plug, I'm the founder of https://commando.io, a web service that allows you to run scripts like this on servers (ssh) from a beautiful web-interface, on a schedule (crontab like), or via GitHub push.

@chrismdp

Owner

chrismdp commented May 1, 2015

Thanks - very nice. Didn't even know about ShellCheck! 👍

@chrismdp

Owner

chrismdp commented May 1, 2015

@nodesocket updated to fix those warnings. Thanks :)

@chrismdp

Owner

chrismdp commented May 1, 2015

Credit where it's due: this was originally modified from a script here, including the public-read ACL stuff.

@Calyhre

Calyhre commented May 1, 2015

Why not use a tool like s3cmd or the official aws-cli?
You could reduce your script to one line, and they even handle the content type for you.
Not to mention credential storage. :)

@chrismdp

Owner

chrismdp commented May 1, 2015

@Calyhre because it's more than 18 lines :) It's nice not to have the cognitive load of another person's code to source, store and maintain over time.

@xrstf

xrstf commented May 1, 2015

I would argue that it's nice to have the cognitive load of handling the API carried by another person's tested, maintained and readily-available code over time. ;-)

Still a good reference for environments where you for some reason can't install awscli.

@chrismdp

Owner

chrismdp commented May 1, 2015

@xrstf If the API changed a lot I'd totally agree with you. The S3 API is standard and stable though, and unlikely to need to change.

It also means I don't have to install certain libraries on my Jenkins host to make it work everywhere (assuming curl exists, of course).

@malbin

malbin commented May 1, 2015

@chrismdp have you had a chance to experiment with large files (>1G)? In my experience 'curl' doesn't always hold up with big requests like that.

@chrismdp

Owner

chrismdp commented May 1, 2015

@malbin not yet as it's outside my use case - only uploading <100MB files.

Fair point though. When I have that requirement I'll write a wrapper to do it - "simplest thing that could work but no simpler" :)
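If curl ever does choke on multi-gigabyte PUTs, one simple wrapper approach is to split the file locally and upload the parts one by one (a real S3 multipart upload needs the signed multipart API, which this script doesn't implement; the sketch below only shows the local splitting half, with made-up tiny sizes):

```shell
#!/bin/sh
# Split a (fake) large file into fixed-size chunks; each chunk could then be
# passed to putS3 individually. Sizes are tiny purely for illustration.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/build.bin" bs=1024 count=10 2>/dev/null
split -b 4096 "$tmpdir/build.bin" "$tmpdir/part-"
parts=$(ls "$tmpdir"/part-* | wc -l | tr -d ' ')
echo "$parts"   # 10 KiB split into 4 KiB chunks -> prints 3
rm -rf "$tmpdir"
```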

@mietek

mietek commented May 1, 2015

If you like this, you may find bashmenot useful. Among other things, it includes GNU bash functions to work with S3.

Documentation:
https://bashmenot.mietek.io/reference

Source:
https://github.com/mietek/bashmenot

@fizerkhan

fizerkhan commented May 1, 2015

Great. It would be good if you printed usage information or added comments at the top about how to use the script.

@matiaskorhonen

matiaskorhonen commented May 2, 2015

Thanks. I ended up using this as a base for a little script that I find handy: https://github.com/matiaskorhonen/shells3

@kaihendry

kaihendry commented May 8, 2015

Wrote a shell script to list a bucket here: https://github.com/kaihendry/s3listing/blob/master/listing.sh Be great to get your feedback.

@ORANGE-XFM

ORANGE-XFM commented Oct 12, 2015

Maybe you need to use

date=$(TZ=utc date -R)

to get a correct date (I did, though on an alternate provider using the S3 API).

@aherve

aherve commented Apr 4, 2016

Thanks for sharing !

@ishanbakshi

ishanbakshi commented Jul 20, 2016

Hey, I am getting an error like this:

<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>

I have checked that the keys are correct. Is it something to do with the signature, or has the "https://$bucket.s3.amazonaws.com$aws_path$file" format changed?

@marcellodesales

marcellodesales commented Sep 13, 2016

I adjusted the script:

  • Specify the objectName instead of the file
  • Output the HTTP headers

Updated Version

#!/bin/bash

objectName=test.properties
file=/home/mdesales/dev/github/spring-cloud-config-publisher/src/test/resources/app/ctg-config-with-matrix-outputTypes/application.properties-test
bucket=publisher-********
resource="/${bucket}/${objectName}"
contentType="text/plain"
dateValue=$(date -R)
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
s3Key=AK*******NQ
s3Secret=xZ******************dAc
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
curl -v -i -X PUT -T "${file}" \
          -H "Host: ${bucket}.s3.amazonaws.com" \
          -H "Date: ${dateValue}" \
          -H "Content-Type: ${contentType}" \
          -H "Authorization: AWS ${s3Key}:${signature}" \
          "https://${bucket}.s3.amazonaws.com/${objectName}"

Upload

$ ./test.sh  
*   Trying 54.231.235.27...
* Connected to publisher-******.s3.amazonaws.com (54.231.235.27) port 443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 704 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
*    server certificate verification OK
*    server certificate status verification SKIPPED
*    common name: *.s3.amazonaws.com (matched)
*    server certificate expiration date OK
*    server certificate activation date OK
*    certificate public key: RSA
*    certificate version: #3
*    subject: C=US,ST=Washington,L=Seattle,O=Amazon.com Inc.,CN=*.s3.amazonaws.com
*    start date: Fri, 29 Jul 2016 00:00:00 GMT
*    expire date: Wed, 29 Nov 2017 12:00:00 GMT
*    issuer: C=US,O=DigiCert Inc,OU=www.digicert.com,CN=DigiCert Baltimore CA-2 G2
*    compression: NULL
* ALPN, server did not agree to a protocol
> PUT /test.properties HTTP/1.1
> Host: publisher-*******.s3.amazonaws.com
> User-Agent: curl/7.43.0
> Accept: */*
> Date: Tue, 13 Sep 2016 13:43:44 -0700
> Content-Type: text/plain
> Authorization: AWS AKI*********Q:/jIS***********Zo=
> Content-Length: 85
> Expect: 100-continue
> 
< HTTP/1.1 100 Continue
HTTP/1.1 100 Continue

* We are completely uploaded and fine
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< x-amz-id-2: dgYk9H7MCGcO1adUzjXVtncD4QJsmNIpLH1wk1zSocaZwXUZd+jh4qfcJgYP8ZR1jK1zovp5RbY=
x-amz-id-2: dgYk9H7MCGcO1adUzjXVtncD4QJsmNIpLH1wk1zSocaZwXUZd+jh4qfcJgYP8ZR1jK1zovp5RbY=
< x-amz-request-id: F8FBC7B473A9C170
x-amz-request-id: F8FBC7B473A9C170
< Date: Tue, 13 Sep 2016 20:43:46 GMT
Date: Tue, 13 Sep 2016 20:43:46 GMT
< ETag: "cff6f9808ba6a73905e78168d7df65e9"
ETag: "cff6f9808ba6a73905e78168d7df65e9"
< Content-Length: 0
Content-Length: 0
< Server: AmazonS3
Server: AmazonS3

@Boomser13

Boomser13 commented Nov 18, 2016

Hi chrismdp,

I ran the same script in my environment it is giving out an error.
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method. (AKI*****************LQ, PUT)

Can you please help me with that?
I am also trying to create a download shell script as well; if you have any information regarding that, do let me know.

Thanks In Advance.
:) 👍

@oystersauce8

oystersauce8 commented Feb 8, 2017

Bro @Boomser13, I was in the same boat. I was missing "#!/bin/bash" at the top of the script; it started working when I added it.
(sh and bash are different, it seems, and when no shebang is specified it sometimes defaults to sh.)
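Relatedly, "echo -en" itself is a bashism: under a plain sh the flags may be handled differently, which silently corrupts the string-to-sign and produces exactly this SignatureDoesNotMatch error. A portable alternative is printf '%b', which expands the \n escapes the same way everywhere (the sample string below is just an illustration):

```shell
#!/bin/sh
# printf '%b' expands backslash escapes portably; echo -en does not behave
# identically across /bin/sh implementations (bash vs dash, etc.).
string="PUT\n\ntext/plain"
expanded=$(printf '%b' "$string")
# $expanded now contains three lines: "PUT", an empty line, and "text/plain"
printf '%s\n' "$expanded"
```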

@nmcgann

nmcgann commented Apr 2, 2017

Updated to handle region codes and storage classes (acl and content type could also be parameterised):

#S3 parameters
S3KEY="my-key"
S3SECRET="my-secret"
S3BUCKET="my-bucket"
S3STORAGETYPE="STANDARD" #REDUCED_REDUNDANCY or STANDARD etc.
AWSREGION="s3-xxxxxx"

function putS3
{
  path=$1
  file=$2
  aws_path=$3
  bucket="${S3BUCKET}"
  date=$(date +"%a, %d %b %Y %T %z")
  acl="x-amz-acl:private"
  content_type="application/octet-stream"
  storage_type="x-amz-storage-class:${S3STORAGETYPE}"
  string="PUT\n\n$content_type\n$date\n$acl\n$storage_type\n/$bucket$aws_path$file"
  signature=$(echo -en "${string}" | openssl sha1 -hmac "${S3SECRET}" -binary | base64)
  curl -s -X PUT -T "$path/$file" \
    -H "Host: $bucket.${AWSREGION}.amazonaws.com" \
    -H "Date: $date" \
    -H "Content-Type: $content_type" \
    -H "$storage_type" \
    -H "$acl" \
    -H "Authorization: AWS ${S3KEY}:$signature" \
    "https://$bucket.${AWSREGION}.amazonaws.com$aws_path$file"
}

@gonzalo-trenco

gonzalo-trenco commented May 26, 2017

Extending @ORANGE-XFM's response and following these docs:
if executing this script from macOS bash, you need to set the date like this
date=$(TZ=utc date +"%Y%m%dT%H%M%SZ")
in order to get rid of the bad-date-format errors.
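For reference, the two formats under discussion look like this: V2 signing uses an RFC-2822-style Date header, while V4 uses the compact ISO 8601 timestamp, and both should be generated in UTC (LC_ALL=C is added below so the day/month names are English regardless of locale):

```shell
#!/bin/sh
# V2-style Date header (RFC 2822 format) and V4-style ISO 8601 timestamp,
# both pinned to UTC so a clock/timezone mismatch can't break the signature.
v2_date=$(LC_ALL=C TZ=utc date +"%a, %d %b %Y %T %z")
v4_date=$(TZ=utc date +"%Y%m%dT%H%M%SZ")
echo "$v2_date"   # e.g. "Tue, 13 Sep 2016 20:43:46 +0000"
echo "$v4_date"   # e.g. "20160913T204346Z"
```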

@miznokruge

miznokruge commented Jun 22, 2017

Hi.
I used your script, but got this error message:

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

Any of you guys got message like this?

cc: @chrismdp

@rosencreuz

rosencreuz commented Jun 22, 2017

There's a problem with the locale. It works fine with the en_US.utf8 locale but fails under other languages because of the date string. I think it would be better if there were an LC_TIME=en_US.utf8 somewhere in the script.
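A minimal way to pin just the date generation to an English locale, rather than the whole script (which exact locales are installed varies by system, but the C locale is always present):

```shell
#!/bin/sh
# Force the C (POSIX/English) locale for the Date header so month and day
# names match what S3 expects, regardless of the machine's default locale.
date_header=$(LC_ALL=C TZ=utc date +"%a, %d %b %Y %T %z")
echo "$date_header"
```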

@aguero700

aguero700 commented Jun 29, 2017

Is there anyone who knows where I can get shells?

@i1skn

i1skn commented Jul 26, 2017

@miznokruge I guess you are using one of the new regions after January 30, 2014. From http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html

This latest signature version is supported in all regions and any new regions after January 30, 2014 will support only Signature Version 4. For more information, go to Authenticating Requests (AWS Signature Version 4) in the Amazon Simple Storage Service API Reference.

This script uses the V2 signature API, so for regions launched after January 30, 2014 you will need Signature Version 4 support.
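For anyone stuck on a V4-only region, the core difference is the derived signing key. Below is a hedged sketch of just that derivation chain with dummy credentials (it is not a full V4 signer; building the canonical request and string-to-sign are additional steps documented by AWS):

```shell
#!/bin/sh
# Signature V4 signing-key derivation: a chain of HMAC-SHA256 operations.
# All values below are dummies for illustration only.
secret="dummy-secret"
date_stamp="20150830"   # YYYYMMDD
region="us-east-1"
service="s3"

# HMAC-SHA256 taking a hex key in $1 and message in $2, emitting a hex digest
hmac_hex() {
  printf '%s' "$2" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$1" | awk '{print $NF}'
}

# The first link keys off the literal string "AWS4" + secret (not hex)
k_date=$(printf '%s' "$date_stamp" | openssl dgst -sha256 -mac HMAC -macopt "key:AWS4${secret}" | awk '{print $NF}')
k_region=$(hmac_hex "$k_date" "$region")
k_service=$(hmac_hex "$k_region" "$service")
k_signing=$(hmac_hex "$k_service" "aws4_request")
echo "$k_signing"   # 64 hex chars: the key used to sign the string-to-sign
```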

@liberodark

liberodark commented May 25, 2018

How to configure this with wasabi ?

@ChestersGarage

ChestersGarage commented Aug 7, 2018

Stoked to come across this. We are using it to upload files from AIX servers, which don't support (read: admins won't install) the AWS CLI and related tooling. Thanks!

@pkhetan

pkhetan commented Sep 24, 2018

My company is using Rook for S3 storage, so I have a different base URL for my bucket, like "http://abc.rook.com".
If I create a new bucket, say "piyush", and put an object inside "test", the URL becomes "http://abc.rook.com/piyush/test/abc.txt". I am able to put objects using s3api but not with the above curl script.
Please suggest what changes I need to make.
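For a path-style endpoint like that, the bucket moves out of the Host header and into the URL path, and the V2 string-to-sign must use the same "/<bucket>/<key>" resource. A sketch of just the resource/URL construction, using the endpoint and names from the comment (no request is actually sent):

```shell
#!/bin/sh
# Path-style addressing: both the signed resource and the URL carry the
# bucket name. The string-to-sign must end with $resource, not /$object.
endpoint="http://abc.rook.com"
bucket="piyush"
object="test/abc.txt"
resource="/${bucket}/${object}"
url="${endpoint}${resource}"
echo "$url"   # prints http://abc.rook.com/piyush/test/abc.txt
```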
