@chrismccord
Last active September 20, 2024 17:54
Simple, dependency-free S3 Form Upload using HTTP POST sigv4
defmodule SimpleS3Upload do
  @moduledoc """
  Dependency-free S3 Form Upload using HTTP POST sigv4

  https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-post-example.html
  """

  @doc """
  Signs a form upload.

  The configuration is a map which must contain the following keys:

    * `:region` - The AWS region, such as "us-east-1"
    * `:access_key_id` - The AWS access key id
    * `:secret_access_key` - The AWS secret access key

  Returns a map of form fields to be used on the client via the JavaScript `FormData` API.

  ## Options

    * `:key` - The required key of the object to be uploaded.
    * `:max_file_size` - The required maximum allowed file size in bytes.
    * `:content_type` - The required MIME type of the file to be uploaded.
    * `:expires_in` - The required expiration time in milliseconds from now
      before the signed upload expires.

  ## Examples

      config = %{
        region: "us-east-1",
        access_key_id: System.fetch_env!("AWS_ACCESS_KEY_ID"),
        secret_access_key: System.fetch_env!("AWS_SECRET_ACCESS_KEY")
      }

      {:ok, fields} =
        SimpleS3Upload.sign_form_upload(config, "my-bucket",
          key: "public/my-file-name",
          content_type: "image/png",
          max_file_size: 10_000,
          expires_in: :timer.hours(1)
        )
  """
  def sign_form_upload(config, bucket, opts) do
    key = Keyword.fetch!(opts, :key)
    max_file_size = Keyword.fetch!(opts, :max_file_size)
    content_type = Keyword.fetch!(opts, :content_type)
    expires_in = Keyword.fetch!(opts, :expires_in)

    expires_at = DateTime.add(DateTime.utc_now(), expires_in, :millisecond)
    amz_date = amz_date(expires_at)
    credential = credential(config, expires_at)

    encoded_policy =
      Base.encode64("""
      {
        "expiration": "#{DateTime.to_iso8601(expires_at)}",
        "conditions": [
          {"bucket": "#{bucket}"},
          ["eq", "$key", "#{key}"],
          {"acl": "public-read"},
          ["eq", "$Content-Type", "#{content_type}"],
          ["content-length-range", 0, #{max_file_size}],
          {"x-amz-server-side-encryption": "AES256"},
          {"x-amz-credential": "#{credential}"},
          {"x-amz-algorithm": "AWS4-HMAC-SHA256"},
          {"x-amz-date": "#{amz_date}"}
        ]
      }
      """)

    fields = %{
      "key" => key,
      "acl" => "public-read",
      "content-type" => content_type,
      "x-amz-server-side-encryption" => "AES256",
      "x-amz-credential" => credential,
      "x-amz-algorithm" => "AWS4-HMAC-SHA256",
      "x-amz-date" => amz_date,
      "policy" => encoded_policy,
      "x-amz-signature" => signature(config, expires_at, encoded_policy)
    }

    {:ok, fields}
  end

  defp amz_date(time) do
    time
    |> NaiveDateTime.to_iso8601()
    |> String.split(".")
    |> List.first()
    |> String.replace("-", "")
    |> String.replace(":", "")
    |> Kernel.<>("Z")
  end

  defp credential(%{} = config, %DateTime{} = expires_at) do
    "#{config.access_key_id}/#{short_date(expires_at)}/#{config.region}/s3/aws4_request"
  end

  defp signature(config, %DateTime{} = expires_at, encoded_policy) do
    config
    |> signing_key(expires_at, "s3")
    |> sha256(encoded_policy)
    |> Base.encode16(case: :lower)
  end

  defp signing_key(%{} = config, %DateTime{} = expires_at, service) when service in ["s3"] do
    amz_date = short_date(expires_at)
    %{secret_access_key: secret, region: region} = config

    ("AWS4" <> secret)
    |> sha256(amz_date)
    |> sha256(region)
    |> sha256(service)
    |> sha256("aws4_request")
  end

  defp short_date(%DateTime{} = expires_at) do
    expires_at
    |> amz_date()
    |> String.slice(0..7)
  end

  defp sha256(secret, msg), do: :crypto.hmac(:sha256, secret, msg)
end
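
For context, the `{:ok, fields}` result is typically consumed from a LiveView external-upload presign callback. A minimal sketch, assuming a LiveView that registered the callback via `allow_upload/3` - the `presign_upload/2` name, bucket, key scheme, and size limit below are illustrative, not part of the gist:

```elixir
# Illustrative integration sketch: the function name, bucket name, key
# scheme, and max size are assumptions, not defined by the gist itself.
defp presign_upload(entry, socket) do
  config = %{
    region: "us-east-1",
    access_key_id: System.fetch_env!("AWS_ACCESS_KEY_ID"),
    secret_access_key: System.fetch_env!("AWS_SECRET_ACCESS_KEY")
  }

  bucket = "my-bucket"
  key = "public/#{entry.client_name}"

  {:ok, fields} =
    SimpleS3Upload.sign_form_upload(config, bucket,
      key: key,
      content_type: entry.client_type,
      max_file_size: 10_000_000,
      expires_in: :timer.hours(1)
    )

  # Metadata shape handed to the client-side "S3" uploader
  meta = %{
    uploader: "S3",
    key: key,
    url: "https://#{bucket}.s3-#{config.region}.amazonaws.com",
    fields: fields
  }

  {:ok, meta, socket}
end
```

This mirrors the `meta` map discussed later in the thread; it would be wired up with something like `allow_upload(:avatar, accept: ~w(.png .jpg), external: &presign_upload/2)` in `mount/3`.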
@dgigafox

I tried this with GCS. As far as I know, x-amz extensions should work with GCS as well, but I am always getting this error:

XHRPOSThttps://storage.googleapis.com/my-bucket
CORS Missing Allow Origin


<?xml version='1.0' encoding='UTF-8'?><Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.</Message><StringToSign>ewogI...</StringToSign></Error>

Actually, I am stuck here with this problem for months already. 😬
Here is the gist that I tried for GCS https://gist.github.com/dgigafox/c293a252d2ad97f5cdeb4c3759313ba5
Some notable changes I made: I used DateTime.utc_now() in amz_date = amz_date(DateTime.utc_now()), and I removed x-amz-server-side-encryption from both the policy and the fields, as it is not defined as a field for the GCS policy.

@genevievecurry

This has been super helpful! Thank you!

I'm working on a new LiveView project with Elixir 1.12.0 and OTP 24, and I ran into an issue with this line:

defp sha256(secret, msg), do: :crypto.hmac(:sha256, secret, msg)

which raises a function :crypto.hmac/3 is undefined or private error. It looks like updating the line to this may solve the issue:

defp sha256(secret, msg), do: :crypto.mac(:hmac, :sha256, secret, msg)

I found this in plug_crypto, does that look right to you?

@plicjo

plicjo commented Sep 2, 2021

@genevievecurry I'm also using that line in my LiveView apps that are doing file uploads.

  defp sha256(secret, msg), do: :crypto.mac(:hmac, :sha256, secret, msg)

It would be great to see this added as a comment, @chrismccord, for Erlang/OTP 24+.

@aswinmohanme

Does anyone know a way to set the cache header on the file? I couldn't figure it out.
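
(S3's POST policy does support Cache-Control as a form field, so one plausible route - an untested sketch, with an illustrative max-age value - is to add it in both places the other signed fields appear:)

```elixir
# Untested sketch: Cache-Control is a supported S3 POST form field, but any
# form field must also appear in the signed policy or S3 rejects the upload.
cache_control = "public, max-age=31536000"  # illustrative value

# 1. Add to the policy's "conditions" list inside sign_form_upload/3:
#      ["eq", "$Cache-Control", "#{cache_control}"],

# 2. Add the matching entry to the `fields` map:
extra_fields = %{"Cache-Control" => cache_control}
```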

@ohashijr

It looks like some setting has changed; I'm getting access denied.

@theycallmehero

@chrismccord What would be the proper way to test this with automated tests?

@eriknaslund

Small gotcha that might be worth knowing about if someone else runs into the same problem.

Setting {"acl": "public-read"}, and "acl" => "public-read", will cause an error when submitting the upload if your S3 bucket has "Block public access (BlockPublicAcls)" enabled. This is a common thing to have if the bucket is behind a CloudFront distribution and not intended to be accessed directly.

If this applies to you, simply remove the two lines of code mentioned above and things should work fine.

@joshchernoff

> This has been super helpful! Thank you!
>
> I'm working on a new LiveView project with Elixir 1.12.0 and OTP 24, and I ran into an issue with this line:
>
> defp sha256(secret, msg), do: :crypto.hmac(:sha256, secret, msg)
>
> which raises a function :crypto.hmac/3 is undefined or private error. It looks like updating the line to this may solve the issue:
>
> defp sha256(secret, msg), do: :crypto.mac(:hmac, :sha256, secret, msg)
>
> I found this in plug_crypto, does that look right to you?

A good read related to said issue I've found. https://www.erlang.org/doc/apps/crypto/new_api.html
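
If the module needs to run on both older and newer Erlang/OTP releases, one option is a version-tolerant variant of the helper. This is a sketch, not from the gist; the runtime probe via `function_exported?/3` is just one way to do it:

```elixir
# Sketch of a drop-in replacement for the gist's sha256/2 helper.
# :crypto.hmac/3 was removed in OTP 24; :crypto.mac/4 is the newer API,
# so probe for it at runtime (ensure :crypto is loaded first, since
# function_exported?/3 does not load the module on its own).
defp sha256(secret, msg) do
  if Code.ensure_loaded?(:crypto) and function_exported?(:crypto, :mac, 4) do
    :crypto.mac(:hmac, :sha256, secret, msg)
  else
    :crypto.hmac(:sha256, secret, msg)
  end
end
```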

@denvaar

denvaar commented May 18, 2022

Similar to this, I made a module that can be used to generate a presigned URL. Use at your own risk though; there are probably some bugs. https://gist.github.com/denvaar/66721b7a2f54f90592a509d29f57f831

@sushilbansal

> Small gotcha that might be worth knowing about if someone else runs into the same problem.
>
> Setting {"acl": "public-read"}, and "acl" => "public-read", will cause an error when submitting the upload if your S3 bucket has "Block public access (BlockPublicAcls)" enabled. This is a common thing to have if the bucket is behind a CloudFront distribution and not intended to be accessed directly.
>
> If this applies to you, simply remove the two lines of code mentioned above and things should work fine.

This helped me with the CORS issue; I could not really understand why it was happening.

@mosiac05

mosiac05 commented Dec 7, 2023

> Small gotcha that might be worth knowing about if someone else runs into the same problem.
>
> Setting {"acl": "public-read"}, and "acl" => "public-read", will cause an error when submitting the upload if your S3 bucket has "Block public access (BlockPublicAcls)" enabled. This is a common thing to have if the bucket is behind a CloudFront distribution and not intended to be accessed directly.
>
> If this applies to you, simply remove the two lines of code mentioned above and things should work fine.

You are a life saver. Thanks!

@Aman7097

Aman7097 commented May 8, 2024

Does the code work for uploading images to a private bucket?

@thistlefluv

thistlefluv commented May 30, 2024

After successfully getting it working locally, I had to solve the following problems after deploying to production.

First, my production site runs HTTPS, so I had to change http to https. I.e., change

meta = %{uploader: "S3", key: key, url: "http://#{bucket}.s3-#{config.region}.amazonaws.com", fields: fields}

to

meta = %{uploader: "S3", key: key, url: "https://#{bucket}.s3-#{config.region}.amazonaws.com", fields: fields}

After that I kept seeing a CORS error. But running the result of "copy as cURL" from the browser developer console showed me:

curl: (60) SSL: no alternative certificate subject name matches target host name 'cdn.fluv.com.s3-ap-northeast-1.amazonaws.com'

This was caused by having a dot or dots in my bucket name; after changing every . to -, it started working.

P.S. Because AWS uses the bucket name as part of the hostname, per RFC 952 using only [-0-9A-Za-z] for bucket names is probably the wisest thing to do.
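
A quick sanity check along those lines (illustrative only; the character class simply excludes dots, since a dotted bucket name no longer matches S3's wildcard TLS certificate):

```elixir
# Illustrative guard: the bucket name becomes part of the TLS hostname,
# so restrict it to lowercase DNS-label-safe characters with no dots.
safe_bucket? = fn bucket -> Regex.match?(~r/\A[a-z0-9][a-z0-9-]*\z/, bucket) end

safe_bucket?.("cdn-fluv-com")  # true
safe_bucket?.("cdn.fluv.com")  # false - dots break the wildcard cert match
```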

@Luisfelipeqt

Is there a way to resize and process the image before uploading to AWS?

@marschro

marschro commented Jul 9, 2024

Anyone else having issues with CORS?

  • I use Vultr Object Storage.
  • PUT works fine, and Vultr says their POST uploads are also S3 compatible.
  • I raised a ticket at Vultr, as I guess they messed something up.
  • CORS policies are set exactly how Chris describes in the documentation on external uploads. But still, the Vultr resource responds with:
    No 'Access-Control-Allow-Origin' header is present on the requested resource

As XHR POSTs use preflight requests, I thought "OPTIONS" must also be added to the CORSRules, but adding it results in a "malformed xml" error response from the object storage.

I have been investigating this for days now and hopefully it's a Vultr issue - I will update as soon as there is news.
Meanwhile I am happy for any hint :)

@marschro

> Anyone else having issues with CORS?
>
>   • I use Vultr Object Storage.
>   • PUT works fine, and Vultr says their POST uploads are also S3 compatible.
>   • I raised a ticket at Vultr, as I guess they messed something up.
>   • CORS policies are set exactly how Chris describes in the documentation on external uploads. But still, the Vultr resource responds with:
>     No 'Access-Control-Allow-Origin' header is present on the requested resource
>
> As XHR POSTs use preflight requests, I thought "OPTIONS" must also be added to the CORSRules, but adding it results in a "malformed xml" error response from the object storage.
>
> I have been investigating this for days now and hopefully it's a Vultr issue - I will update as soon as there is news. Meanwhile I am happy for any hint :)

And as promised, the result:

Vultr responded and indeed fixed some smaller things, but that was more of a side effect.
The important information for everyone using Vultr's S3-compatible Object Storage:

  • POST uploads are supported by Vultr
  • But they do not support SSE-S3 server-side encryption
  • They do support SSE-C - more about that in this document: Vultr SSE-C

Solution to the issue:
In the policy, set SSE to an empty string => of course, then it's not SSE...

{"x-amz-server-side-encryption": ""},

Also change it to an empty value in the form fields:

fields =
      %{
        "key" => key,
        "acl" => "public-read",
        "content-type" => content_type,
        "x-amz-server-side-encryption" => "",
        "x-amz-credential" => credential,
        "x-amz-algorithm" => "AWS4-HMAC-SHA256",
        "x-amz-date" => amz_date,
        "policy" => encoded_policy,
        "x-amz-signature" => signature(config, expires_at, encoded_policy)
      }

Hope that helps someone, as it took me some hours :)

Cheers!
And thanks to Vultr for their prompt support, which helped to solve this!

@joshchernoff

> Is there a way to resize and process the image before uploading to AWS?

I've done this before using a wasm lib and canvas.
