
@chrismdp /s3.sh
Last active Aug 21, 2017

Uploading to S3 in 18 lines of Shell (used to upload builds for http://soltrader.net)
# You don't need Fog in Ruby or some other library to upload to S3 -- shell works perfectly fine
# This is how I upload my new Sol Trader builds (http://soltrader.net)
# Based on a modified script from here: http://tmont.com/blargh/2014/1/uploading-to-s3-in-bash
S3KEY="my aws key"
S3SECRET="my aws secret" # pass these in
function putS3
{
  path=$1
  file=$2
  aws_path=$3
  bucket='my-aws-bucket'
  date=$(date +"%a, %d %b %Y %T %z")
  acl="x-amz-acl:public-read"
  content_type='application/x-compressed-tar'
  string="PUT\n\n$content_type\n$date\n$acl\n/$bucket$aws_path$file"
  signature=$(echo -en "${string}" | openssl sha1 -hmac "${S3SECRET}" -binary | base64)
  curl -X PUT -T "$path/$file" \
    -H "Host: $bucket.s3.amazonaws.com" \
    -H "Date: $date" \
    -H "Content-Type: $content_type" \
    -H "$acl" \
    -H "Authorization: AWS ${S3KEY}:$signature" \
    "https://$bucket.s3.amazonaws.com$aws_path$file"
}

for file in "$path"/*; do
  putS3 "$path" "${file##*/}" "/path/on/s3/to/files/"
done
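For reference, the Authorization header the script builds is an AWS Signature Version 2 HMAC-SHA1. Here is a minimal sketch of just the signing step, with placeholder key, date and path (none of these values are real), showing that the result is a 28-character base64 digest:

```shell
#!/bin/bash
# Build the V2 string-to-sign the same way the script does (placeholder values)
content_type='application/x-compressed-tar'
date='Thu, 17 Nov 2005 18:49:58 +0000'
acl='x-amz-acl:public-read'
resource='/my-aws-bucket/path/on/s3/to/files/build.tar.gz'
string="PUT\n\n$content_type\n$date\n$acl\n$resource"

# HMAC-SHA1 with the secret, then base64: 20 digest bytes encode to 28 chars
signature=$(echo -en "${string}" | openssl sha1 -hmac "example-secret" -binary | base64)
echo "$signature"
```

S3 recomputes this signature server-side from the same headers, which is why any mismatch in the date, ACL or resource path produces SignatureDoesNotMatch.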

ShellCheck suggests the following. 😄


Also, shameless plug: I'm the founder of https://commando.io, a web service that allows you to run scripts like this on servers (via ssh) from a beautiful web interface, on a schedule (crontab-like), or via GitHub push.

Owner

chrismdp commented May 1, 2015

Thanks - very nice. Didn't even know about ShellCheck! 👍

Owner

chrismdp commented May 1, 2015

@nodesocket updated to fix those warnings. Thanks :)

Owner

chrismdp commented May 1, 2015

Credit where it's due: this is originally modified from a script here: http://tmont.com/blargh/2014/1/uploading-to-s3-in-bash. I added the public-read ACL stuff.

Calyhre commented May 1, 2015

Why not use a tool like s3cmd or the official aws-cli?
You could reduce your script to one line, and it even handles content-type for you.
Not to mention credential storage :)
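For comparison, the aws-cli equivalent of the whole gist is roughly a single `aws s3 cp` call; a sketch using the bucket and path from the script above (the call itself is commented out because it needs an installed CLI and configured credentials):

```shell
#!/bin/bash
bucket="my-aws-bucket"
dest="s3://$bucket/path/on/s3/to/files/"
# aws s3 cp "$path" "$dest" --recursive --acl public-read
echo "$dest"
```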

Owner

chrismdp commented May 1, 2015

@Calyhre because it's more than 18 lines :) It's nice not to have the cognitive load of another person's code to source, store and maintain over time.

xrstf commented May 1, 2015

I would argue that it's nice to have the cognitive load of handling the API be handled by another person's tested, maintained and readily-available code over time. ;-)

Still a good reference for environments where you for some reason can't install awscli.

Owner

chrismdp commented May 1, 2015

@xrstf If the API changed a lot I'd totally agree with you. The S3 API is standard and stable though, and unlikely to need to change.

It also means I don't have to install extra libraries on my Jenkins host to make it work (assuming curl exists, of course).

malbin commented May 1, 2015

@chrismdp have you had a chance to experiment with large files (>1G)? In my experience 'curl' doesn't always hold up with big requests like that.

Owner

chrismdp commented May 1, 2015

@malbin not yet as it's outside my use case - only uploading <100MB files.

Fair point though. When I have that requirement I'll write a wrapper to do it - "simplest thing that could work but no simpler" :)

mietek commented May 1, 2015

If you like this, you may find bashmenot useful. Among other things, it includes GNU bash functions to work with S3.

Documentation:
https://bashmenot.mietek.io/reference

Source:
https://github.com/mietek/bashmenot

Great. It would be good if you printed a usage message or added top-of-file comments about how to use the script.

Thanks. I ended up using this as a base for a little script that I find handy: https://github.com/matiaskorhonen/shells3

Wrote a shell script to list a bucket here: https://github.com/kaihendry/s3listing/blob/master/listing.sh Be great to get your feedback.

maybe you need to use

date=$(TZ=utc date -R)

to have a correct date (I did, on an alternate provider using the S3 API though)
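For reference, `date -R` emits an RFC 2822 date, and pinning `TZ` forces the `+0000` offset; a quick sanity check that the output has the shape the Date header needs:

```shell
#!/bin/bash
# RFC 2822 date pinned to UTC, e.g. "Mon, 01 May 2017 12:00:00 +0000"
date_hdr=$(TZ=utc date -R)
echo "$date_hdr"
```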

aherve commented Apr 4, 2016

Thanks for sharing !

ishanbakshi commented Jul 20, 2016 edited

Hey I am getting an error like this :

<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>

I have checked the keys are correct, is it something to do with the signature or "https://$bucket.s3.amazonaws.com$aws_path$file" format has changed?

I adjusted the script:

  • Specify the objectName instead of the file
  • Output the HTTP headers

Updated Version

#!/bin/bash

objectName=test.properties
file=/home/mdesales/dev/github/spring-cloud-config-publisher/src/test/resources/app/ctg-config-with-matrix-outputTypes/application.properties-test
bucket=publisher-********
resource="/${bucket}/${objectName}"
contentType="text/plain"
dateValue=$(date -R)
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
s3Key=AK*******NQ
s3Secret=xZ******************dAc
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
curl -v -i -X PUT -T "${file}" \
          -H "Host: ${bucket}.s3.amazonaws.com" \
          -H "Date: ${dateValue}" \
          -H "Content-Type: ${contentType}" \
          -H "Authorization: AWS ${s3Key}:${signature}" \
          https://${bucket}.s3.amazonaws.com/${objectName}

Upload

$ ./test.sh  
*   Trying 54.231.235.27...
* Connected to publisher-******.s3.amazonaws.com (54.231.235.27) port 443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 704 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
*    server certificate verification OK
*    server certificate status verification SKIPPED
*    common name: *.s3.amazonaws.com (matched)
*    server certificate expiration date OK
*    server certificate activation date OK
*    certificate public key: RSA
*    certificate version: #3
*    subject: C=US,ST=Washington,L=Seattle,O=Amazon.com Inc.,CN=*.s3.amazonaws.com
*    start date: Fri, 29 Jul 2016 00:00:00 GMT
*    expire date: Wed, 29 Nov 2017 12:00:00 GMT
*    issuer: C=US,O=DigiCert Inc,OU=www.digicert.com,CN=DigiCert Baltimore CA-2 G2
*    compression: NULL
* ALPN, server did not agree to a protocol
> PUT /test.properties HTTP/1.1
> Host: publisher-*******.s3.amazonaws.com
> User-Agent: curl/7.43.0
> Accept: */*
> Date: Tue, 13 Sep 2016 13:43:44 -0700
> Content-Type: text/plain
> Authorization: AWS AKI*********Q:/jIS***********Zo=
> Content-Length: 85
> Expect: 100-continue
> 
< HTTP/1.1 100 Continue
HTTP/1.1 100 Continue

* We are completely uploaded and fine
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< x-amz-id-2: dgYk9H7MCGcO1adUzjXVtncD4QJsmNIpLH1wk1zSocaZwXUZd+jh4qfcJgYP8ZR1jK1zovp5RbY=
x-amz-id-2: dgYk9H7MCGcO1adUzjXVtncD4QJsmNIpLH1wk1zSocaZwXUZd+jh4qfcJgYP8ZR1jK1zovp5RbY=
< x-amz-request-id: F8FBC7B473A9C170
x-amz-request-id: F8FBC7B473A9C170
< Date: Tue, 13 Sep 2016 20:43:46 GMT
Date: Tue, 13 Sep 2016 20:43:46 GMT
< ETag: "cff6f9808ba6a73905e78168d7df65e9"
ETag: "cff6f9808ba6a73905e78168d7df65e9"
< Content-Length: 0
Content-Length: 0
< Server: AmazonS3
Server: AmazonS3

Hi chrismdp,

I ran the same script in my environment, but it gives this error:
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method. (AWSAccessKeyId AKI*****************LQ, StringToSign starting with PUT)

Can you please help me with that?
I am also trying to create a download shell script, so if you have any information on that, do let me know.

Thanks in advance.
:) 👍

bro @Boomser13, I was in the same boat. I was missing "#!/bin/bash" at the top of the script, it started working when I added it.
(sh and bash are different it seems, and sometimes when not specified it defaults to sh)
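The shebang matters because the gist leans on bashisms, notably the `function` keyword and `echo -en`; plain `sh` implementations may print the `-en` flags literally instead of interpreting them. A portable alternative, sketched here, is `printf '%b'`, which expands backslash escapes in any POSIX shell:

```shell
#!/bin/bash
# printf '%b' expands \n in any POSIX shell; echo -e behaviour varies
string="PUT\n\ntext/plain"
expanded=$(printf '%b' "$string")
printf '%s\n' "$expanded"
```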

nmcgann commented Apr 2, 2017

Updated to handle region codes and storage classes (acl and content type could also be parameterised):

#S3 parameters
S3KEY="my-key"
S3SECRET="my-secret"
S3BUCKET="my-bucket"
S3STORAGETYPE="STANDARD" #REDUCED_REDUNDANCY or STANDARD etc.
AWSREGION="s3-xxxxxx"

function putS3
{
  path=$1
  file=$2
  aws_path=$3
  bucket="${S3BUCKET}"
  date=$(date +"%a, %d %b %Y %T %z")
  acl="x-amz-acl:private"
  content_type="application/octet-stream"
  storage_type="x-amz-storage-class:${S3STORAGETYPE}"
  string="PUT\n\n$content_type\n$date\n$acl\n$storage_type\n/$bucket$aws_path$file"
  signature=$(echo -en "${string}" | openssl sha1 -hmac "${S3SECRET}" -binary | base64)
  curl -s -X PUT -T "$path/$file" \
    -H "Host: $bucket.${AWSREGION}.amazonaws.com" \
    -H "Date: $date" \
    -H "Content-Type: $content_type" \
    -H "$storage_type" \
    -H "$acl" \
    -H "Authorization: AWS ${S3KEY}:$signature" \
    "https://$bucket.${AWSREGION}.amazonaws.com$aws_path$file"
}

Extending @ORANGE-XFM's response, and following the docs:
if you're executing this script from macOS bash you need to set the date like this
date=$(TZ=utc date +"%Y%m%dT%H%M%SZ")
in order to get rid of the bad date format errors
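That `%Y%m%dT%H%M%SZ` format is the ISO 8601 "basic" timestamp that Signature Version 4 requests use for `x-amz-date`; a quick check that it comes out in the expected shape:

```shell
#!/bin/bash
# e.g. 20160404T093000Z -- the x-amz-date timestamp format
amz_date=$(TZ=utc date +"%Y%m%dT%H%M%SZ")
echo "$amz_date"
```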

miznokruge commented Jun 22, 2017 edited

Hi.
I used your script, but got an error message:

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

Any of you guys got message like this?

cc: @chrismdp

There's a problem with the locale: the script works fine with the en_US.utf8 locale but fails under other languages because of the localized date string. It would be better to set LC_TIME=en_US.utf8 somewhere in the script.
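A sketch of that kind of fix, pinning the locale only for the date call so the rest of the environment is untouched; I've used the `C` locale here instead of `en_US.utf8` since `C` is always available and also yields English day/month names:

```shell
#!/bin/bash
# LC_TIME=C forces English names in the Date header regardless of user locale
date_hdr=$(LC_TIME=C TZ=utc date +"%a, %d %b %Y %T %z")
echo "$date_hdr"
```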


i1skn commented Jul 26, 2017

@miznokruge I guess you are using one of the new regions after January 30, 2014. From http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html

This latest signature version is supported in all regions and any new regions after January 30, 2014 will support only Signature Version 4. For more information, go to Authenticating Requests (AWS Signature Version 4) in the Amazon Simple Storage Service API Reference.

This script uses the V2 signature API, so to use regions launched after January 30, 2014 you will need Signature Version 4 support.
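The practical difference for this gist is the signing key: V2 HMACs the string-to-sign directly with the secret, while V4 first derives a scoped key through a chain of HMAC-SHA256 steps. A sketch of just that derivation with openssl (all values are placeholders, not real credentials; the V4 canonical-request and string-to-sign parts are omitted):

```shell
#!/bin/bash
# Derive the SigV4 signing key: each step HMACs the next scope component,
# using the previous digest (as hex) as the key
hmac_sha256() {
  printf '%s' "$2" | openssl dgst -sha256 -mac HMAC -macopt "$1" | sed 's/^.* //'
}

secret='EXAMPLE-SECRET-KEY'   # placeholder
date_scope='20170726'         # YYYYMMDD part of the request date
region='us-east-1'
service='s3'

k_date=$(hmac_sha256 "key:AWS4${secret}" "$date_scope")
k_region=$(hmac_sha256 "hexkey:${k_date}" "$region")
k_service=$(hmac_sha256 "hexkey:${k_region}" "$service")
k_signing=$(hmac_sha256 "hexkey:${k_service}" "aws4_request")
echo "$k_signing"   # 64 hex chars; this key then HMACs the V4 string-to-sign
```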
