@ftpmorph
Last active April 22, 2024 19:47
Amazon AWS S3 IAM permissions required for Mastodon
When setting up a Mastodon instance I had a very hard time working out the required S3 permissions.
Wasted a day on it. None of the tutorials or even the official documentation gave me this information.
In the end I gave up and just gave it blanket access to all permissions for the Mastodon bucket (S3Administrator).
But this didn't sit well with me - I don't like granting unnecessary permissions, especially not when S3 has about 100 of them.
If the server were to become compromised or the keys were to otherwise fall into the wrong hands I'd want a potentially malicious actor to have as limited permissions as possible.
Anyway, I finally worked out the permissions required for Mastodon to function with an S3 bucket as its media storage.
See below for the IAM policy.
Make sure you replace "yourbucketname" with the actual name of your bucket.
Also make sure you've allowed objects within the bucket to be set as public in the bucket's permission settings in the S3 Console.
Then all you need to do is create a new policy in the IAM console (you'll probably have to select S3Administrator first and hit review), open the JSON editor, remove what's there, and paste in the policy below instead.
(Note: AWS changes their UI constantly, so the exact steps may differ by the time you read this; just make sure you find the option to edit the IAM policy's JSON.)
Then assign that policy to a user, generate keys for that new user, and put them in your .env.production in your Mastodon config.
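If you prefer the AWS CLI over the console, the same steps look roughly like this (a sketch only; the policy, user, and file names are placeholders I've made up, and the policy ARN comes from the create-policy output):

# Save the IAM policy JSON below as mastodon-s3-policy.json, then:
aws iam create-policy --policy-name MastodonS3LimitedAccess --policy-document file://mastodon-s3-policy.json

# Create a dedicated user and attach the policy to it (use the ARN printed above)
aws iam create-user --user-name mastodon-s3
aws iam attach-user-policy --user-name mastodon-s3 --policy-arn arn:aws:iam::YOUR_ACCOUNT_ID:policy/MastodonS3LimitedAccess

# Generate the access key pair to paste into .env.production
aws iam create-access-key --user-name mastodon-s3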
You want your .env.production config to look like this for S3 settings:
S3_ENABLED=true
S3_PROTOCOL=https
S3_BUCKET=yourbucketname
S3_REGION=eu-west-2
S3_HOSTNAME=yourbucketname.s3.amazonaws.com
S3_PERMISSIONS=public-read
AWS_ACCESS_KEY_ID=XXXXXXX
AWS_SECRET_ACCESS_KEY=420blaze69
Most importantly, you want to ensure the S3_HOSTNAME is just "yourbucketname.s3.amazonaws.com".
Do NOT include your region in the hostname! Many other tutorials tell you to do this, but it is only applicable if you are using the CloudFront CDN.
If you are connecting directly to an S3 bucket, keep the region out of the hostname or you'll get a 404 or some other error, because of the way Mastodon's code uses this value to build the S3 request URLs.
Ensure you change the region to where your bucket actually is, of course, and put your real access key and secret in.
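A quick way to sanity-check the hostname before pointing Mastodon at it (just a sketch; a 403 on the bucket root is expected and simply proves the name resolves, while a DNS error means the hostname is wrong):

curl -I https://yourbucketname.s3.amazonaws.com/
# HTTP/1.1 403 Forbidden here is fine - anonymous listing is denied, but the bucket resolves.
# Once Mastodon has uploaded something, a specific public object key should return 200 OK.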
You probably do not actually need the "S3_PERMISSIONS" line but it doesn't hurt anything.
"public-read" means only you (the AWS account owner) and those you've specifically granted access (such as your Mastodon user here) have write access to the bucket, while everyone else can only read uploaded files.
When users upload files to your Mastodon instance the request is sent to the backend where Mastodon uses these keys to write a file.
This is the desired configuration. The keys should NEVER be shared.
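To see what that looks like in practice, here is a rough sketch using the AWS CLI with the new user's keys (assuming they're configured under a local profile, e.g. via aws configure --profile mastodon-s3; the object key is just an example):

# Authenticated upload with a public-read ACL, mirroring what Mastodon does:
aws s3api put-object --bucket yourbucketname --key test.png --body test.png --acl public-read --profile mastodon-s3

# Anyone can now read the object without credentials:
curl -I https://yourbucketname.s3.amazonaws.com/test.png    # 200 OK

# But anonymous writes are refused:
curl -X PUT --data-binary @test.png https://yourbucketname.s3.amazonaws.com/evil.png    # 403 AccessDenied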
Note: although I am using this config with Mastodon, it is highly likely it also applies to pretty much any use case where you want a web app to be able to upload content from somewhere to an S3 bucket then display it to users.
Good luck, hope this helps someone!
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LimitedPermissionsToOneBucket",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": ["arn:aws:s3:::yourbucketname"]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:DeleteObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": ["arn:aws:s3:::yourbucketname/*"]
        }
    ]
}
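Once the policy is attached and the keys are in place, you can spot-check that the grant really is limited to the one bucket. A hedged sketch (the profile name is an assumption, and "some-other-bucket" stands in for any other bucket in your account):

# Allowed: bucket-level operations on the Mastodon bucket
aws s3api get-bucket-location --bucket yourbucketname --profile mastodon-s3
aws s3api list-objects-v2 --bucket yourbucketname --max-keys 5 --profile mastodon-s3

# Denied: the same call against any other bucket should fail with AccessDenied
aws s3api list-objects-v2 --bucket some-other-bucket --profile mastodon-s3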
dephekt commented Nov 18, 2022

Thanks for posting this, as it helped me get started figuring out the required S3 setup to make this all work through a CDN. One minor clarification regarding S3_PERMISSION, since this caused me real problems: S3_PERMISSION defaults to public-read if it is not specified in your Mastodon config. Also, you call it S3_PERMISSIONS, but the environment variable is actually S3_PERMISSION. Internally (i.e. in the runtime code) Mastodon uses s3_permissions, which is an unforgivable inconsistency with the environment variable name and maybe where you got the plural from.
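In other words, if you do set it explicitly, the line in .env.production should read (singular, despite the plural used elsewhere):

S3_PERMISSION=public-read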

Since this is all completely undocumented by Mastodon server docs, I wanted to go more in-depth here to save anyone else time trying to figure this stuff out, as I had to dive into Mastodon's Ruby code to find straight answers, and I don't even know Ruby.

Mastodon is using this Rails gem called Paperclip (a deprecated project, by the way) which appears to be an abstraction on top of another gem called ActiveStorage. ActiveStorage is ultimately what is handling the S3 API calls for Mastodon.

ACLs

The reason you needed the s3:PutObjectAcl permission in your policy is that, as mentioned above, Mastodon by default tries to set a "public-read" ACL when uploading files. You can change this object-level (not bucket-level) permission by setting S3_PERMISSION to something else (see the sketch after this list). The out-of-the-box "canned ACLs" in AWS are:

  • private
  • public-read
  • public-read-write
  • authenticated-read
  • aws-exec-read
  • bucket-owner-read
  • bucket-owner-full-control
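Each of these canned ACLs just becomes an x-amz-acl header on the upload request. A hedged CLI illustration of what Mastodon is effectively doing (bucket and key names are placeholders):

# Equivalent of S3_PERMISSION=public-read (the Mastodon default)
aws s3api put-object --bucket toot-assets --key test.png --body test.png --acl public-read

# Equivalent of S3_PERMISSION=private
aws s3api put-object --bucket toot-assets --key test.png --body test.png --acl private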

Object ownership: Bucket owner enforced & CloudFront

This gave me issues because I am using the AWS-recommended bucket configuration for object ownership called "bucket owner enforced". In this case, ACLs are completely disabled. Access to objects is controlled strictly via attached policies, not ACLs.

In my case, there is no public access (not even read-only) to objects in my Mastodon bucket. The only access granted is to the CloudFront service principal, so CloudFront can fetch objects from the bucket and serve them over my CDN. I have a bucket policy like this:

{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::toot-assets/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::MY_AWS_ACCOUNT:distribution/CF_DISTRIBUTION"
                }
            }
        }
    ]
}
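For reference, this is roughly how that bucket configuration looks from the AWS CLI (a sketch only; the bucket name is mine, and the policy file name assumes you saved the JSON above locally):

# Enforce bucket-owner ownership, which disables ACLs entirely
aws s3api put-bucket-ownership-controls --bucket toot-assets --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]'

# Block all public access; CloudFront is the only reader, via the policy above
aws s3api put-public-access-block --bucket toot-assets --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Attach the CloudFront-only bucket policy
aws s3api put-bucket-policy --bucket toot-assets --policy file://cf-bucket-policy.json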

In this situation, since ACLs are disabled, the S3 API will reject PUT calls that include an ACL (see the "warning" note in this linked article). This means the default Mastodon setup will fail to write objects to S3 if your bucket uses a similar configuration (which is the default and AWS-recommended config).

To make such a situation work, you need to set S3_PERMISSION= in your config, so that it gets set to nil internally by the Mastodon server when this code is evaluated. That causes it not to set an ACL at all, so the PUT calls succeed and object permissions are left entirely to the bucket's attached policies.
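Concretely, the change is just an empty assignment in .env.production, and you can see the difference it makes with a quick CLI check (a sketch; the exact error text may vary, but on an ACL-disabled bucket it should be something like AccessControlListNotSupported):

# .env.production
S3_PERMISSION=

# With ACLs disabled on the bucket, a PUT that specifies an ACL is rejected:
aws s3api put-object --bucket toot-assets --key test.png --body test.png --acl public-read
# => An error occurred (AccessControlListNotSupported): The bucket does not allow ACLs

# The same PUT without an ACL succeeds:
aws s3api put-object --bucket toot-assets --key test.png --body test.png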

Notes

Some of you may have noticed that my bucket policy grants Mastodon no access to read or write objects. In my case, I handle this at the IAM user level: I have a user group AmazonS3MastodonAccess, and that group has an attached policy called MastodonServerAccessS3Bucket. That policy looks like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::toot-assets"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:PutObjectAcl",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": "arn:aws:s3:::toot-assets/*"
        }
    ]
}

Then I created an IAM user called "s3-mastodon" with API access only, put it into the AmazonS3MastodonAccess group, and it therefore inherits the above policy and gains private access to the Mastodon bucket.
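The same group-and-user wiring, sketched with the AWS CLI (the names match those above; MY_AWS_ACCOUNT is a placeholder and the policy JSON is assumed saved locally):

aws iam create-policy --policy-name MastodonServerAccessS3Bucket --policy-document file://MastodonServerAccessS3Bucket.json
aws iam create-group --group-name AmazonS3MastodonAccess
aws iam attach-group-policy --group-name AmazonS3MastodonAccess --policy-arn arn:aws:iam::MY_AWS_ACCOUNT:policy/MastodonServerAccessS3Bucket

aws iam create-user --user-name s3-mastodon
aws iam add-user-to-group --group-name AmazonS3MastodonAccess --user-name s3-mastodon
aws iam create-access-key --user-name s3-mastodon    # keys go into .env.production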
