
@chrismdp
Last active March 5, 2024 12:57
Uploading to S3 in 18 lines of Shell (used to upload builds for http://soltrader.net)
# You don't need Fog in Ruby or some other library to upload to S3 -- shell works perfectly fine
# This is how I upload my new Sol Trader builds (http://soltrader.net)
# Based on a modified script from here: http://tmont.com/blargh/2014/1/uploading-to-s3-in-bash
S3KEY="my aws key"
S3SECRET="my aws secret" # pass these in

function putS3
{
  path=$1      # local directory containing the file
  file=$2      # file name within $path
  aws_path=$3  # key prefix on S3 (must start and end with "/")
  bucket='my-aws-bucket'
  date=$(date +"%a, %d %b %Y %T %z")
  acl="x-amz-acl:public-read"
  content_type='application/x-compressed-tar'
  string="PUT\n\n$content_type\n$date\n$acl\n/$bucket$aws_path$file"
  signature=$(echo -en "${string}" | openssl sha1 -hmac "${S3SECRET}" -binary | base64)
  curl -X PUT -T "$path/$file" \
    -H "Host: $bucket.s3.amazonaws.com" \
    -H "Date: $date" \
    -H "Content-Type: $content_type" \
    -H "$acl" \
    -H "Authorization: AWS ${S3KEY}:$signature" \
    "https://$bucket.s3.amazonaws.com$aws_path$file"
}

# $path must be set to the local directory whose files you want to upload
for file in "$path"/*; do
  putS3 "$path" "${file##*/}" "/path/on/s3/to/files/"
done
@xrstf

xrstf commented May 1, 2015

I would argue that it's nicer to have the cognitive load of handling the API carried by someone else's tested, maintained, and readily-available code. ;-)

Still a good reference for environments where you can't install awscli for some reason.

@chrismdp
Author

chrismdp commented May 1, 2015

@xrstf If the API changed a lot, I'd totally agree with you. The S3 API is standard and stable, though, and unlikely to change.

It also means I don't have to install certain libraries on my Jenkins host to make it work everywhere (assuming curl exists, of course).

@malbin

malbin commented May 1, 2015

@chrismdp have you had a chance to experiment with large files (>1G)? In my experience 'curl' doesn't always hold up with big requests like that.

@chrismdp
Author

chrismdp commented May 1, 2015

@malbin not yet as it's outside my use case - only uploading <100MB files.

Fair point though. When I have that requirement I'll write a wrapper to do it - "simplest thing that could work but no simpler" :)

@mietek

mietek commented May 1, 2015

If you like this, you may find bashmenot useful. Among other things, it includes GNU bash functions to work with S3.

Documentation:
https://bashmenot.mietek.io/reference

Source:
https://github.com/mietek/bashmenot

@fizerkhan

Great. It would be good if you printed a usage message or added top-of-file comments explaining how to use the script.

@matiaskorhonen

Thanks. I ended up using this as a base for a little script that I find handy: https://github.com/matiaskorhonen/shells3

@kaihendry

Wrote a shell script to list a bucket here: https://github.com/kaihendry/s3listing/blob/master/listing.sh Be great to get your feedback.

@xfmoulet

Maybe you need to use

date=$(TZ=utc date -R)

to get a correct date (I did, on an alternate provider using the S3 API).

@aherve

aherve commented Apr 4, 2016

Thanks for sharing !

@ishanbakshi

ishanbakshi commented Jul 20, 2016

Hey, I am getting an error like this:

<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>

I have checked that the keys are correct. Is it something to do with the signature, or has the "https://$bucket.s3.amazonaws.com$aws_path$file" format changed?
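A common cause of SignatureDoesNotMatch is the string-to-sign drifting by a byte or two from what S3 reconstructs server-side (wrong resource path, a non-English date, or a stray character). A debugging sketch, using the same variable names as the gist but with placeholder values, that prints exactly what is being signed:

```shell
#!/bin/bash
# Debug sketch: show the exact string-to-sign and the resulting signature.
# All values here are placeholders -- substitute the real ones from the script.
S3SECRET="example-secret"
bucket="my-aws-bucket"
aws_path="/path/on/s3/to/files/"
file="build.tar.gz"
content_type="application/x-compressed-tar"
acl="x-amz-acl:public-read"
date=$(LC_ALL=C date +"%a, %d %b %Y %T %z")   # force English day/month names

string="PUT\n\n$content_type\n$date\n$acl\n/$bucket$aws_path$file"
printf 'String to sign:\n%b\n' "$string"
signature=$(printf '%b' "$string" | openssl sha1 -hmac "${S3SECRET}" -binary | base64)
echo "Signature: $signature"
```

The SignatureDoesNotMatch error body usually includes a StringToSign element; if the printed string differs from it in any byte, the signatures will not match.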

@marcellodesales

I adjusted the script:

  • Specify the objectName instead of the file
  • Output the HTTP headers

Updated Version

#!/bin/bash

objectName=test.properties
file=/home/mdesales/dev/github/spring-cloud-config-publisher/src/test/resources/app/ctg-config-with-matrix-outputTypes/application.properties-test
bucket=publisher-********
resource="/${bucket}/${objectName}"
contentType="text/plain"
dateValue=$(date -R)
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
s3Key=AK*******NQ
s3Secret=xZ******************dAc
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
curl -v -i -X PUT -T "${file}" \
          -H "Host: ${bucket}.s3.amazonaws.com" \
          -H "Date: ${dateValue}" \
          -H "Content-Type: ${contentType}" \
          -H "Authorization: AWS ${s3Key}:${signature}" \
          https://${bucket}.s3.amazonaws.com/${objectName}

Upload

$ ./test.sh  
*   Trying 54.231.235.27...
* Connected to publisher-******.s3.amazonaws.com (54.231.235.27) port 443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 704 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
*    server certificate verification OK
*    server certificate status verification SKIPPED
*    common name: *.s3.amazonaws.com (matched)
*    server certificate expiration date OK
*    server certificate activation date OK
*    certificate public key: RSA
*    certificate version: #3
*    subject: C=US,ST=Washington,L=Seattle,O=Amazon.com Inc.,CN=*.s3.amazonaws.com
*    start date: Fri, 29 Jul 2016 00:00:00 GMT
*    expire date: Wed, 29 Nov 2017 12:00:00 GMT
*    issuer: C=US,O=DigiCert Inc,OU=www.digicert.com,CN=DigiCert Baltimore CA-2 G2
*    compression: NULL
* ALPN, server did not agree to a protocol
> PUT /test.properties HTTP/1.1
> Host: publisher-*******.s3.amazonaws.com
> User-Agent: curl/7.43.0
> Accept: */*
> Date: Tue, 13 Sep 2016 13:43:44 -0700
> Content-Type: text/plain
> Authorization: AWS AKI*********Q:/jIS***********Zo=
> Content-Length: 85
> Expect: 100-continue
> 
< HTTP/1.1 100 Continue

* We are completely uploaded and fine
< HTTP/1.1 200 OK
< x-amz-id-2: dgYk9H7MCGcO1adUzjXVtncD4QJsmNIpLH1wk1zSocaZwXUZd+jh4qfcJgYP8ZR1jK1zovp5RbY=
< x-amz-request-id: F8FBC7B473A9C170
< Date: Tue, 13 Sep 2016 20:43:46 GMT
< ETag: "cff6f9808ba6a73905e78168d7df65e9"
< Content-Length: 0
< Server: AmazonS3

@Boomser13

Hi chrismdp,

I ran the same script in my environment and it gives this error:
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method. AKI*****************LQ PUT

Can you please help me with that?
I am also trying to create a download shell script as well; if you have any information regarding that, do let me know.

Thanks in advance.
:) 👍

@oystersauce8

bro @Boomser13, I was in the same boat. I was missing "#!/bin/bash" at the top of the script; it started working once I added it.
(sh and bash are different, and when no shebang is specified the script may run under sh.)
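The portability issue comes from `echo -en`: under sh (often dash), echo may not understand `-e`/`-n` and prints them literally, corrupting the string-to-sign. `printf '%b'` behaves the same in every POSIX shell, so one hedged tweak to the gist's signing line (request values here are made up):

```shell
# Portable replacement for `echo -en "${string}"` in the signing pipeline.
# The request values below are hypothetical.
string="PUT\n\ntext/plain\nSat, 01 Jan 2022 00:00:00 +0000\n/my-bucket/hello.txt"
signature=$(printf '%b' "$string" | openssl sha1 -hmac "example-secret" -binary | base64)
echo "$signature"
```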

@nmcgann

nmcgann commented Apr 2, 2017

Updated to handle region codes and storage classes (acl and content type could also be parameterised):

#S3 parameters
S3KEY="my-key"
S3SECRET="my-secret"
S3BUCKET="my-bucket"
S3STORAGETYPE="STANDARD" #REDUCED_REDUNDANCY or STANDARD etc.
AWSREGION="s3-xxxxxx"

function putS3
{
  path=$1
  file=$2
  aws_path=$3
  bucket="${S3BUCKET}"
  date=$(date +"%a, %d %b %Y %T %z")
  acl="x-amz-acl:private"
  content_type="application/octet-stream"
  storage_type="x-amz-storage-class:${S3STORAGETYPE}"
  string="PUT\n\n$content_type\n$date\n$acl\n$storage_type\n/$bucket$aws_path$file"
  signature=$(echo -en "${string}" | openssl sha1 -hmac "${S3SECRET}" -binary | base64)
  curl -s -X PUT -T "$path/$file" \
    -H "Host: $bucket.${AWSREGION}.amazonaws.com" \
    -H "Date: $date" \
    -H "Content-Type: $content_type" \
    -H "$storage_type" \
    -H "$acl" \
    -H "Authorization: AWS ${S3KEY}:$signature" \
    "https://$bucket.${AWSREGION}.amazonaws.com$aws_path$file"
}

@gonzalo-trenco

Extending @ORANGE-XFM's response and following these docs:
if you run this script from bash on macOS you need to set the date like this
date=$(TZ=utc date +"%Y%m%dT%H%M%SZ")
in order to get rid of the bad date format errors.

@miznokruge

miznokruge commented Jun 22, 2017

Hi.
I used your script, but got this error message:

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

Any of you guys got message like this?

cc: @chrismdp

@rosencreuz

There's a problem with the locale. It works fine with the en_US.utf8 locale but fails under other languages because of the date string. I think it would be better to have an LC_TIME=en_US.utf8 somewhere in the script.
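A minimal sketch of that fix: pin the locale just for the date call, so the day and month names are always English (S3 expects an RFC 1123-style Date header):

```shell
# Locale-independent Date header for the signature, regardless of system locale
date=$(LC_ALL=C date +"%a, %d %b %Y %T %z")
echo "$date"
```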

@aguero700

Does anyone know where I can get shells?

@i1skn

i1skn commented Jul 26, 2017

@miznokruge I guess you are using one of the regions launched after January 30, 2014. From http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html:

This latest signature version is supported in all regions and any new regions after January 30, 2014 will support only Signature Version 4. For more information, go to Authenticating Requests (AWS Signature Version 4) in the Amazon Simple Storage Service API Reference.

This script uses the Signature Version 2 API, so for regions launched after January 30, 2014 you will need Signature Version 4.
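For SigV4-only regions, one option (rather than hand-rolling the much longer V4 signing chain) is to let curl do the signing: curl 7.75.0 and later have a built-in `--aws-sigv4` option. A sketch, where the bucket, region, file, and credential variables are all placeholders:

```shell
#!/bin/bash
# Sketch: SigV4 signing done by curl itself (requires curl >= 7.75.0).
# bucket, region, file, S3KEY, and S3SECRET are placeholders.
bucket="my-bucket"
region="eu-central-1"
file="build.tar.gz"

cmd=(curl -X PUT -T "$file"
     --user "${S3KEY}:${S3SECRET}"
     --aws-sigv4 "aws:amz:${region}:s3"
     "https://${bucket}.s3.${region}.amazonaws.com/${file}")

# Only fire the request when credentials are actually set
if [ -n "${S3KEY:-}" ] && [ -n "${S3SECRET:-}" ]; then
  "${cmd[@]}"
else
  printf 'Would run: %s\n' "${cmd[*]}"
fi
```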

@liberodark

How do I configure this with Wasabi?

@ChestersGarage

Stoked to come across this. We are using it to upload files from AIX servers, which don't support (read: admins won't install) the AWS CLI and related tooling. Thanks!

@pkhetan

pkhetan commented Sep 24, 2018

My company uses Rook for S3 storage, so I have a different base URL for my bucket, like "http://abc.rook.com".
If I create a new bucket called "piyush" and put an object inside "test", the URL becomes "http://abc.rook.com/piyush/test/abc.txt". I am able to put objects using s3api, but not with the curl script above.
Please suggest what changes I need to make.

@atonamy

atonamy commented Aug 21, 2020

> Hi chrismdp,
>
> I ran the same script in my environment it is giving out an error.
> SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method. AKI*****************LQ PUT
>
> Can you please help me with that..
> I am also trying to create a download shell script as well if you have any information regarding that do let me know.
>
> Thanks In Advance.
> :) 👍

I have the same problem

The request signature we calculated does not match the signature you provided

How can I fix that?

@micaelomota

micaelomota commented Sep 14, 2020

For those with root access: just install awscli and be happy doing the whole thing in one line: https://aws.amazon.com/getting-started/hands-on/backup-to-s3-cli/
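For completeness, the awscli equivalent of the whole gist is roughly one command (the local directory, bucket, and prefix below are made up):

```shell
# Hypothetical awscli equivalent of the upload loop in the gist
src="./builds"
dest="s3://my-aws-bucket/path/on/s3/to/files/"
if command -v aws >/dev/null 2>&1; then
  aws s3 cp "$src" "$dest" --recursive
else
  echo "awscli not installed; would run: aws s3 cp $src $dest --recursive"
fi
```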

@sanlodhi

How can I test my connection to the S3 bucket?

@sanlodhi

sanlodhi commented Oct 29, 2020

Hi, could you please help me with the following:

How do I check whether my connection to S3 is successful or not?
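One hedged way to smoke-test connectivity without credentials is a plain request for the HTTP status: a reachable bucket answers with some status (403 for a private bucket, 404 for a missing key), while a network problem yields no status at all. The bucket name below is a placeholder:

```shell
#!/bin/bash
# Smoke test: does the S3 endpoint answer at all? (bucket name is a placeholder)
bucket="my-aws-bucket"
if command -v curl >/dev/null 2>&1; then
  # curl prints the HTTP status; "000" means no response (network/DNS problem)
  status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
    "https://${bucket}.s3.amazonaws.com/" || true)
else
  status="000"
fi

if [ "$status" = "000" ]; then
  echo "No response: check network, DNS, or that curl is installed"
else
  echo "Endpoint reachable: HTTP $status (403 simply means the bucket is private)"
fi
```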
