Use S3 and CloudFront to host static Single Page Apps (SPAs) with HTTPS and www redirects. Also covers deployments.

S3 Static Sites

What this will cover

  • Host a static website at S3
  • Redirect www.website.com to website.com
  • Website can be an SPA (requiring all requests to return index.html)
  • Free AWS SSL certs
  • Deployment with CDN invalidation

Resources

S3 Bucket

  • Create an S3 bucket named exactly after the domain name, for example website.com.
  • In Properties, click the Static Website section.
    • Click Use this bucket to host a website and enter index.html into Index Document field.
    • Don't enter anything else in this form.
    • This will create an "endpoint" on the same screen similar to http://website.com.s3-website-us-east-1.amazonaws.com.
  • Then click on Permissions tab, then Bucket Policy. Enter this policy:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*"
        }
    ]
}

Be sure to replace BUCKET_NAME with yours.
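If you prefer the command line, the same policy can be applied with the AWS CLI — a sketch, assuming the CLI is configured with credentials; the /tmp path is just a placeholder:

```shell
# Write the bucket policy to a local file (swap BUCKET_NAME for your bucket)
cat > /tmp/bucket-policy.json <<'EOF'
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": { "AWS": "*" },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*"
        }
    ]
}
EOF

# Apply it to the bucket (requires credentials, so commented out here):
# aws s3api put-bucket-policy --bucket BUCKET_NAME --policy file:///tmp/bucket-policy.json
```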

Note: The bucket name doesn't actually have to match the domain name. Several articles claim it must, but it doesn't. However, if you plan to use wildcard domains with AWS, I've read that the bucket name can't contain dots. In short: you can name the bucket whatever you like, and dots work fine as long as you're not using wildcard domains.

Uploading an index.html should now allow us to visit the "endpoint".
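To sanity-check the endpoint, you can upload a placeholder index.html from the command line — a sketch; the file content and /tmp path are just examples:

```shell
# Create a minimal placeholder page locally
cat > /tmp/index.html <<'EOF'
<!doctype html>
<html><head><title>Hello</title></head><body>It works</body></html>
EOF

# Upload it publicly readable (requires credentials; BUCKET_NAME is yours):
# aws s3 cp /tmp/index.html s3://BUCKET_NAME/index.html --acl public-read
```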

CloudFront

  • Go to the CloudFront section and click Create Distribution and then create for Web, not RTMP.
  • In Origin Domain Name, paste the "endpoint" created earlier in S3 (without the http:// part). Note that when you click this field, it acts like a dropdown with options for your existing S3 buckets; I think you can just select your bucket from that list instead.
  • The order of these instructions assumes SSL certificates are not set up yet, so skip any SSL-related settings for now.
  • Select "yes" for Compress Objects Automatically.
  • In Alternate Domain Names (CNAMEs), enter the domain names you want to point at this bucket, each on its own line or separated by commas. The reason you may have two or more is something like this: mywebsite.com and www.mywebsite.com. The field is called "Alternate Domain Names" because AWS assigns the CDN its own aws-specific domain name, but you don't want visitors to use that, so enter your custom domains here and then use Route 53 (next section) to point them at the CDN.
  • In Default Root Object, type index.html.
  • Click Create. The next screen lists your distributions in a table; the one we just made will be "In Progress" for a few minutes.

The distribution will have a domain name like dpo155j0y52ps.cloudfront.net. This is important for DNS (see below), so copy it somewhere.

Route 53

These DNS instructions assume your DNS is hosted at AWS. That doesn't mean you have to buy the domain at AWS; it means that when you buy a domain somewhere like Google or GoDaddy, you point its NS records at AWS so that AWS can manage the DNS records. But first, create the "Hosted Zone" at AWS, which generates the NS values you then give to Google, GoDaddy, etc. I don't know how any of this differs if you buy your domain at AWS (then again, I never buy domains at the same place I host).

  • Click Hosted Zones
  • Create a new Zone: use the domain name (mywebsite.com, without subdomain) for the zone. Note that each domain name gets one zone; subdomains all belong to the same zone.
  • This should create NS records such as:
ns-1208.awsdns-23.org. 
ns-2016.awsdns-60.co.uk. 
ns-642.awsdns-16.net. 
ns-243.awsdns-30.com.
  • These NS records can be used to point DNS management from another domain registrar to AWS Route 53.
  • Click Create Record Set to create an A record.
    • This will be the record that points mywebsite.com to CloudFront.
    • For the name, enter no value
    • Change Alias to Yes
    • Paste the CloudFront domain in the Alias field
      • This should look like [some-random-number].cloudfront.net. You can get it by clicking your CloudFront distribution; in the General tab there is a "Domain Name" label.
    • Click Create Record Set
  • Create another A record for the www redirect
    • Follow the same steps as for the previous A record, but enter www for the name and use the same CloudFront domain. Note this is because we want www.mywebsite.com and mywebsite.com to point to the same bucket (and therefore the same CloudFront domain). I suppose you would make a whole new bucket and a whole new CloudFront distribution (with a new CF domain) if you wanted a second project at app.mywebsite.com. This might be common if your app is a React app that is completely separate code from your "home page" website, which might come from a static site generator or something.
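The two alias records above can also be created with the CLI — a sketch, where mywebsite.com and the dpo155j0y52ps domain are placeholders for your own values; Z2FDTNDATAQYW2 is the fixed hosted-zone ID AWS uses for every CloudFront alias target:

```shell
# Change batch for an alias A record pointing the bare domain at CloudFront
# (repeat with "Name": "www.mywebsite.com." for the www record)
cat > /tmp/apex-alias.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mywebsite.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "dpo155j0y52ps.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
EOF

# Submit it (requires credentials; YOUR_ZONE_ID comes from the Hosted Zone screen):
# aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch file:///tmp/apex-alias.json
```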

HTTPS

In the AWS Console, go to Certificate Manager and request a cert for the domain and all subdomains. You will be required to verify the certificate via email or DNS. If verifying by email, AWS will look up the public DNS owner information and use up to three emails it finds there (if your domain ownership info is public). Even if it's not public, AWS will also send to these addresses (which you don't get to choose):

  • administrator@mywebsite.com
  • hostmaster@mywebsite.com
  • postmaster@mywebsite.com
  • webmaster@mywebsite.com
  • admin@mywebsite.com

If your company uses "webmaster@", hats off to you, because your app is probably 1000 years old.

For .io TLDs: http://docs.aws.amazon.com/acm/latest/userguide/troubleshoot-iodomains.html

If you choose to verify via DNS, AWS will ask you to add some CNAME records to your Route 53 DNS, but the nice thing is that there is a shortcut button to do so (for each domain and sub domain) from within the Certificate Manager section.

After the verification is done and the cert is "issued", we can go back into CloudFront to edit our distribution for this domain:

  • Click the distribution and on the next page (in the General tab), click Edit
  • Check the box for Custom SSL Certificate
  • Select our cert and save. Note that what looks like a text field is really a dropdown menu once you click it to choose your certificate.
  • When done with the form, click the Behaviors tab and edit the only record that should be there
  • Select Redirect HTTP to HTTPS. Click Save

SPA

If the website is an SPA, we need to make sure all requests to the server (S3 in this case) return something even if no file exists. This is because SPAs like React (with React Router) need the index.html page for every request; things like "not found" pages are then handled on the front end.

Go to CloudFront and click the distribution you want to apply these SPA settings to. Click the Error Pages tab and add a new error page. Fill the form with these fields:

  • HTTP Error Code: 404
  • TTL: 0
  • Custom Error Response: Yes
  • Response Page Path: /index.html
  • HTTP Response Code: 200
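For reference, those console fields map onto the CustomErrorResponses section of the distribution's config in the CloudFront API — a sketch of the JSON fragment only; actually applying it means fetching the full config with get-distribution-config, merging this in, and calling update-distribution with the returned ETag:

```shell
# The SPA error-page settings as a CustomErrorResponses fragment
# (ResponseCode is a string in the CloudFront API)
cat > /tmp/custom-error.json <<'EOF'
{
  "Quantity": 1,
  "Items": [
    {
      "ErrorCode": 404,
      "ResponsePagePath": "/index.html",
      "ResponseCode": "200",
      "ErrorCachingMinTTL": 0
    }
  ]
}
EOF
```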

Deployment

For deployment, we need to consider that files in the CloudFront CDN are not meant to change. If we were to upload new files to S3, they would not be deployed to the CDN's edge servers and therefore would not update the website. Read More.

To invalidate files on the CDN we'll need to use CloudFront's invalidations feature: Read More.

In the AWS console, in the CloudFront management of a distribution, there is a tab for Invalidations. We could manually create an invalidation (with the value of /*) to invalidate all S3 files. Note that invalidation records here are one-time invalidations and every time we deploy new files, we will need to make a new invalidation.

To deploy with invalidations, we will need to install AWS-CLI first. We also assume you have an IAM user from AWS with an Access Key and Secret Access Key.

To test installation, do:

aws --version

Configure aws-cli:

aws configure --profile PICK_A_PROFILE_NAME

Note that using "profiles" to configure AWS-CLI is probably best since you might want to use the CLI to manage multiple AWS accounts at some point. Be sure to swap out PICK_A_PROFILE_NAME for your name choice (can be anything).

Enter these values:

AWS Access Key ID [None]: [Your Access Key]
AWS Secret Access Key [None]: [Your Secret Access Key]
Default region name [None]: us-east-1
Default output format [None]: json

This will save your entries at ~/.aws/credentials. Note that you need to enter the correct region for your AWS resources. I used us-east-1, but make sure to use the right one for you. Also note that you can have responses in text instead of json if you want.

You can omit the last two questions (region and format) if you want to set up a default for your machine (that all profiles will use). The default profile is located at ~/.aws/config. If you omit the region and format from your profile, be sure they exist in your ~/.aws/config as:

[default]
output = json
region = us-east-1

Now, since we'll need to do some CloudFront commands which are "experimental", we need to do:

aws configure set preview.cloudfront true

This will result in more records at ~/.aws/config.

We should be set up now to test a deployment. Run:

aws s3 sync --acl public-read --profile YOUR_PROFILE_NAME --delete build/ s3://BUCKET_NAME
  • Obviously replace YOUR_PROFILE_NAME and BUCKET_NAME with yours. Also this assumes the folder you want to upload is build.
  • This command will
    • Ensure all new files uploaded are public (--acl public-read)
    • Ensure we're using your credentials from your local AWS profile (--profile YOUR_PROFILE_NAME)
    • Remove any existing S3 objects that don't exist locally (--delete)

After deployment is verified and successful, we need to invalidate:

aws cloudfront --profile YOUR_PROFILE_NAME create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths '/*'
  • Obviously replace YOUR_PROFILE_NAME and YOUR_DISTRIBUTION_ID with yours. Note that your Distribution ID can be found in the CloudFront section of the AWS console.
  • If the invalidation worked, you'll be able to see a record of it in the Invalidations tab after clicking on your distribution.

To make it all easier, add to package.json:

  "scripts": {
    "deploy": "aws s3 sync --acl public-read --profile XYZ --delete build/ s3://XYX && npm run invalidate",
    "invalidate": "aws cloudfront --profile XYZ create-invalidation --distribution-id XYZ --paths '/*'"
  },

XYZ stands in for all the parts that need to be replaced. Now you can run npm run deploy, which will deploy and then invalidate.

Cheers!

@fubar

fubar commented Mar 17, 2018

Very handy write-up, thanks! FYI, I got 403s (instead of 404s) for my routes and needed to add a Cloudfront Error Page for 403s before it would work.

@pratheekhegde

pratheekhegde commented Mar 28, 2018

The policy set for the bucket will also allow public access from the S3 website URL of the bucket. Isn't this bad?

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*" 
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*"
        }
    ]
}
@stavros-zavrakas

stavros-zavrakas commented Mar 29, 2018

In Alternate Domain Names (CNAMEs), put the domain names which correspond to the two buckets. Put each on their own line OR separated by comma.

We just created one bucket. What do you mean with this? Can you explain a bit more how to find the alternate domain names?

@lastlink

lastlink commented Apr 24, 2018

How would you handle multiple SPA's in subdirectories? e.g. bucket/test/index.html

@debugpoint136

debugpoint136 commented May 1, 2018

I have a React App and built a component to upload files to S3. When deployed on S3 as static website, how to hide .env parameters, like AWS ID and secret key?

@ergusto

ergusto commented May 3, 2018

I am also very interested in the question lastlink asked.

@innergap

innergap commented May 10, 2018

Awesome article. Bookmarking!

@jamesgaddum

jamesgaddum commented May 16, 2018

💯 great article, was very helpful in implementing an SPA

@bogretsovv

bogretsovv commented May 17, 2018

Hello,
Thank you for the great article, but I have a question about error handling. If I make a request from JS to a REST API backend and it returns 404, the index.html will be returned too instead of API error (because of custom error page). What is the right way to handle such cases?

@richessler

richessler commented May 18, 2018

You, Sir, are a Saint. 💯 * 💯 - namely about the invalidation process

@strongpauly

strongpauly commented May 26, 2018

@bogretsovv The API should have its own URL, hosting, and error handling, separate from this static webpage.

@keithdmoore

keithdmoore commented Jun 17, 2018

@bradwestfall Thanks for creating this and sharing. Great info here! The cloudfront invalidation notes were great!

@dlopuch

dlopuch commented Jul 19, 2018

Great gist.

Re: @pratheekhegde public bucket-access policy question, yes, the bucket policy in the gist grants public access to the s3 bucket. Almost certainly not what you want if you want "an https site" or cloudfront to be the only way to access the bucket.

When you're creating your Cloudfront distribution, there's a "Restrict Bucket Access" Yes/No question. You can answer Yes then a new question pops up: "Origin Access Identity". Answer Create a New Identity.

This creates a new CloudFront Origin Access Identity and automagically updates your bucket's Bucket Policy to be something like this:

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.mysite.com/*"
        },
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity S0M3H4SHC0D3"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.mysite.com/*"
        }
    ]
}

You can then remove the top "AllowPublicRead" statement if you added it originally as part of this gist, otherwise you should be fine with the cloudfront-only origin access identity.

@bradwestfall

Owner

bradwestfall commented Aug 6, 2018

I was not notified of any of these by GitHub, even with @ my name. So perhaps none of you will be notified of this. But just in case:

@stavros-zavrakas At one point I think I was trying to setup a www bucket specifically for a redirect. I updated the language to be more clear though.

@lastlink @ergusto I probably wouldn't try to do multiple SPA's within dirs like this mywebsite.com/app1 mywebsite.com/app2. Instead I would host those in different buckets, each with their own CloudFront Dist, and then do app1.mywebsite.com app2.mywebsite.com

@debugpoint136 I'm not really sure

@bogretsovv The error handling I described is for when someone visits mywebsite.com/some-sub-page directly and this would ordinarily cause a 404, so my instructions show you how to make S3 respond with the index file and a 200 when a file isn't found (like some-sub-page). I think what you're talking about is when the JS front-end then wants to talk to your database through some REST-API. In that case, I would setup api.mywebsite.com (or some sub domain) which your front-end can talk to and this API endpoint would not be an S3 static site endpoint, it would be some sort of bonafide backend with real response codes for 404.

@dlopuch Thanks, I just couldn't get it working though

@EmilEriksen

EmilEriksen commented Aug 10, 2018

@dlopuch @bradwestfall I also couldn't get it to work with the more restricted policy at first. Turns out the trick was just waiting long enough (in my case maybe 3 or 4 hours). What happened at first was that I'd just get redirected from the CloudFront URL to the S3 URL which would cause a forbidden error because of the bucket policy. This can be an issue with newly created buckets and CF-distributions apparently. So these are the steps needed to get it to work with the more restricted policy:

  1. Create bucket with default permissions (no public access). I don't even think you actually have to enable static website hosting although I haven't tested it.
  2. Create a CF distribution and in Origin Domain Name select the bucket you just created from the dropdown instead of pasting the endpoint URL as described in this tutorial.
  3. In Restrict Bucket Access select Yes. In Origin Access Identity select Create a New Identity (or Use an Existing Identity if you already have an identity). In Grant Read Permissions on Bucket select Yes, Update Bucket Policy. Otherwise configure everything as described in this tutorial.
  4. Wait 3 to 4 hours (maybe more - that's how long I had to wait). You can setup SSL, CNAMEs etc. while you wait.
@Murz1k

Murz1k commented Aug 19, 2018

Thank you, man! Very helpful!

@lucashfreitas

lucashfreitas commented Aug 24, 2018

Thank you! If you don't mind, I'd like to ask you some questions. How do caching and updating of a front-end website hosted on CloudFront work?

  • When I upload new files to the bucket, will I see the changes IMMEDIATELY after accessing the site again? If not, how long will it take?
  • Do I need to make an invalidation request to see the changes on the website? If yes, how long will it take?

What is the best approach to deal with caching and updates of Single Page Applications on CloudFront? (I am using a React application and Webpack)

@bradwestfall

Owner

bradwestfall commented Aug 29, 2018

@lucashfreitas You need to do the invalidations if you're using the CloudFront CDN; otherwise some places in the world might not get the latest files from the S3 bucket. Even if you can go to the website and see the changes, others might not, because they might be connecting to a different CDN endpoint. The invalidations as I described take about 30 seconds or less.

@americoneto1

americoneto1 commented Sep 6, 2018

@OyoKooN

OyoKooN commented Sep 20, 2018

Nice work, thanks! ☺️

@phoenecke

phoenecke commented Oct 1, 2018

@bradwestfall I also got 403 from S3. After adding another policy entry to allow s3:ListBucket I started getting 404 instead.

I have the same issue as @bogretsovv. I would prefer to not use api.mywebsite.com to avoid turning on CORS. What I really want is to be able to setup a custom error response that is specific to my S3 origin, and pass through both 404 and 403 from my API origin. Not really a question just hoping someone might have a perfect solution here.

@elliotaplant

elliotaplant commented Nov 2, 2018

I really appreciate the post! Thanks.

One thing that got me a bit stuck was the invalidation command. If you're going to run this from a shell, make sure you keep the quotes around the invalidation paths option:

>> aws cloudfront --profile XYZ create-invalidation --distribution-id XYZ --paths '/*'

If you don't put the quotes there, you won't get an error since you are creating an invalidation for / on your distribution
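The quoting issue above is plain shell globbing, which you can see without touching AWS at all — a quick demo in a scratch directory:

```shell
# Make a scratch dir with a couple of files
mkdir -p /tmp/glob-demo
touch /tmp/glob-demo/a.txt /tmp/glob-demo/b.txt

# Unquoted: the shell expands the glob before the program ever sees it
echo /tmp/glob-demo/*      # prints: /tmp/glob-demo/a.txt /tmp/glob-demo/b.txt

# Quoted: the literal string is passed through untouched,
# which is what the CloudFront --paths option needs
echo '/tmp/glob-demo/*'    # prints: /tmp/glob-demo/*
```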

@LucasLopesr

LucasLopesr commented Nov 27, 2018

Nice work, thanks!

@carlyman

carlyman commented Dec 12, 2018

@bradwestfall: I'm having one problem, tho...is this supposed to redirect from www to non-www (i.e. such that the URL in the address bar never shows www)? I can access my site with and without, but it doesn't act like a re-write.

Nonetheless...this is an awesome guide; so many old and incorrect how-tos out there.
