@plindberg
Created August 5, 2017 16:46
How to set up an AWS Lambda function for returning S3 pre-signed URLs for uploading files.

README

Granted, this is little more than an obfuscated way of having a publicly writable S3 bucket, but if you don’t have a server which can pre-sign URLs for you, this might be an acceptable solution.

For this to work, you take the following steps:

  1. Create a Lambda func, along with a new IAM role, keeping the default code.
  2. Create an API in the API Gateway.
  3. Create a resource in said API.
  4. Create a POST method for that API resource, pointing it to the above Lambda func.
  5. Deploy the API to a new stage.
  6. Verify that you can call the Lambda func using curl followed by the URL shown for the stage, resource, and method.
  7. Create an S3 bucket if you haven’t already, and one or more folders you want to upload to.
  8. Add a bucket policy granting public read access to those folders (a sample is included in this gist).
  9. Create an IAM policy granting write permissions in those folders (also included).
  10. Attach this policy to the IAM role created above.
  11. Update the Lambda func with the code in this gist.
  12. Add an environment variable named s3_bucket with the name of your S3 bucket (the Lambda reads it from process.env).
  13. Verify that you can call it using curl followed by the same URL as previously, followed by the parameters --request POST --header 'Content-Type: application/json' --data '{"object_key": "folder/filename.ext"}'
  14. Finally, verify that you can upload a file to the URL returned, using curl --upload-file followed by the path to some file and the URL received in the previous step. (A consolidated sketch of these last two steps follows below.)

That’s it!
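If you would rather script steps 13 and 14 than type the two curl commands by hand, here is a minimal Node.js sketch. It assumes a non-proxy API Gateway integration, i.e. the JSON body is passed straight through to the Lambda event and the Lambda's result is returned unchanged; the endpoint URL, object key, and file path are placeholders to replace with your own.

'use strict';
const https = require('https');
const fs = require('fs');

// Placeholders: substitute your own API Gateway URL, object key, and local file.
const apiUrl = 'https://example.execute-api.us-east-1.amazonaws.com/prod/upload-url';
const objectKey = 'folder1/filename.ext';  // must fall under a folder the IAM policy allows
const filePath = './filename.ext';

// Small promise wrapper around https.request.
function request(url, options, body) {
  return new Promise((resolve, reject) => {
    const req = https.request(url, options, (res) => {
      let data = '';
      res.on('data', (chunk) => { data += chunk; });
      res.on('end', () => resolve({status: res.statusCode, body: data}));
    });
    req.on('error', reject);
    if (body) req.write(body);
    req.end();
  });
}

(async () => {
  // Step 13: ask the Lambda (via API Gateway) for a pre-signed upload URL.
  const res = await request(apiUrl, {
    method: 'POST',
    headers: {'Content-Type': 'application/json'}
  }, JSON.stringify({object_key: objectKey}));
  const {url} = JSON.parse(res.body);

  // Step 14: PUT the file to the pre-signed URL (equivalent to curl --upload-file).
  const upload = await request(url, {method: 'PUT'}, fs.readFileSync(filePath));
  console.log('Upload status:', upload.status);  // 200 means the object is in the bucket
})();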

The Lambda function code (step 11):

'use strict';

const AWS = require('aws-sdk');
const s3 = new AWS.S3({signatureVersion: 'v4'});

exports.handler = (event, context, callback) => {
  // The bucket name is supplied via the s3_bucket environment variable (step 12).
  const bucket = process.env['s3_bucket'];
  if (!bucket) {
    callback(new Error('S3 bucket not set'));
    return;
  }

  // The caller supplies the object key in the request body, e.g. {"object_key": "folder/filename.ext"}.
  const key = event['object_key'];
  if (!key) {
    callback(new Error('S3 object key missing'));
    return;
  }

  // Generate a pre-signed PUT URL. Note that the signature is computed locally,
  // so an out-of-date IAM policy only surfaces later, when the upload is attempted.
  const params = {Bucket: bucket, Key: key};
  s3.getSignedUrl('putObject', params, (error, url) => {
    if (error) {
      callback(error);
    } else {
      callback(null, {url: url});
    }
  });
};
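A URL signed this way is valid for 15 minutes by default. If you want a shorter window, or want to pin the upload to a particular content type, getSignedUrl accepts additional parameters. The following variation is only a sketch with example values, not part of the original gist:

// Sketch only: example values for limiting the URL lifetime and Content-Type.
const params = {
  Bucket: bucket,
  Key: key,
  Expires: 300,                             // URL expires after 5 minutes (default is 900 seconds)
  ContentType: 'application/octet-stream'   // the upload must then send this exact Content-Type header
};
s3.getSignedUrl('putObject', params, (error, url) => {
  if (error) {
    callback(error);
  } else {
    callback(null, {url: url});
  }
});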
The IAM policy attached to the Lambda's role, granting write access to the upload folders (steps 9–10):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:PutObject"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket/folder1/*"]
    },
    {
      "Action": ["s3:PutObject"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket/folder2/*"]
    }
  ]
}
The bucket policy granting public read access to those folders (step 8); note the Sid of each statement must be unique:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicReadFolder1",
      "Effect": "Allow",
      "Principal": {"AWS": "*"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/folder1/*"
    },
    {
      "Sid": "AllowPublicReadFolder2",
      "Effect": "Allow",
      "Principal": {"AWS": "*"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/folder2/*"
    }
  ]
}

andreafalzetti commented May 29, 2018

Hi, thanks for this!

I've created the lambda and the bucket as specified. The lambda returns the pre-signed URL, but the upload request that uses it fails with a 403:

<?xml version="1.0" encoding="UTF-8"?>
<Error>
    <Code>AccessDenied</Code>
    <Message>Access Denied</Message>
    <RequestId>DFEADD09BBEC7AF5</RequestId>
    <HostId>PKsGfBduOrHRPtkc/VVG5mbstj3cqyscGMu/afuDBfl/7BgFmAuWIjwUW3KILGXcG2WEnqW7Knk=</HostId>
</Error>

Any idea why?

@andreafalzetti

Update! My bad: during tests I renamed the bucket, so the IAM policy was out of date. Interestingly, the Lambda will still create the pre-signed URL, but that URL won't let you write to the S3 bucket. Hope it helps.


0100110110110010 commented Dec 4, 2019

Thank you for the great summary! Setting up this large-file upload was easy thanks to this gist, and it avoids API Gateway's payload restrictions.

Some thoughts:
If the "-i" argument is added to the upload file cURL command, the S3 upload return values are displayed - including the "ETAG" which is the MD5 checksum of the just uploaded file. By this checking a correct upload up to S3 bucket can be done. I found this helpful.

The s3 bucket policy allows public access to be able to download the uploaded files directly. For "upload only" use cases and further processing of the uploaded data this is not required. I did my set up with S3 bucket setting to block every public access and this works properly.

cURL Windows 10 compatibility: single quotes (' ') don't work on the command line; use double quotes (" ") instead, like

--data "{\"object_key\": \"upload/myfile.bin\"}"

A simple way to check the MD5 checksum on the command line, if you don't have a Java CLI yet:
Ubuntu:

md5sum <filename>

Win10:

certutil -hashfile "<filename>" MD5

@karges612

curl https://94XXXX.execute-api.us-west-1.amazonaws.com/uploadtest --request POST --header 'Content-Type: application/json --data '{"object_key": "C:\Users\seeth\Downloads\download.jpg"}'

I am using the above URL and getting the error: "Could not parse request body into json: Could not parse payload into json"

@karges612

It's my bad, there was a space between the content type and json. But even after clearing it, the error I'm getting is "object key missing".
Below is the command.
curl https://94drtx3a0a.execute-api.us-west-1.amazonaws.com/uploadtest --request POST --header 'Content-Type:application/json' {"object_key": "C:\poclog.txt"}'
