General Tips for CloudFront + S3 for Static Sites or going "headless"

This assumes you've set up an AWS account and logged into the AWS Console. For anyone new to AWS, there is also a CLI available. AWS will give you a root AccessKey and Token, and from that point forward you can configure users through IAM.
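For example, once you've created an IAM user with an access key, you can point the CLI at it with a named profile (the profile name here is just an example):

# Configure a named profile for this site's work
aws configure --profile site-deploy
# AWS Access Key ID [None]: <your IAM user's access key>
# AWS Secret Access Key [None]: <your IAM user's secret key>
# Default region name [None]: us-east-1
# Default output format [None]: json

Subsequent aws commands can then take --profile site-deploy.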

Important things to note, since there is an order/method to your setup. Do things out of order and it may require you to invalidate paths, since CloudFront caches your site in edge locations. The first 1,000 invalidation paths per month are free; after that they start charging you per path. So if you have a ton of files, use that feature sparingly. What do you need to get right up front?

  • If you don't enable compression upfront (gzip) in CloudFront
  • If you don't set Cache-Control upfront in S3 (or any other headers)
  • If you forget to set S3 CORS settings or have values missing
  • If you change a file without any type of version hash

Failure to do these things up front, as I've alluded to, will result in you repeatedly invalidating your cache. I use a tool like GTmetrix.com to hit the site, to verify everything is working as intended.
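You can also run a quick spot-check yourself once the site is behind CloudFront (the domain and asset path here are placeholders):

# Spot-check the headers CloudFront serves for an asset
curl -sI -H "Accept-Encoding: gzip" https://www.domain.com/main.css \
    | grep -iE "cache-control|content-encoding|etag|x-cache"

You're looking for a long max-age, Content-Encoding: gzip, and X-Cache: Hit from cloudfront on a warmed edge.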

If you are coming in with a fresh account, you can qualify for free-tier hosting for 12 months. You may be slightly fearful of doing something that results in a high bill. That's why I'm writing this up.

Running an EC2 instance 24/7 may have no benefit to you, and depending on your settings it can run you $13-30+ per month. I'm personally moving more toward static sites, then utilizing API Gateway, Lambda and beyond, so I'm only paying for what I use versus paying that fee for running an EC2 instance all month. You can still send emails via API Gateway, Lambda and SES.

Tip: You'll also need access to your registrar and/or DNS config so you can properly link your domain back to CloudFront. You could be using Route 53 on AWS or your own.

I'm keeping this simple for now, but you may want to consider CLI possibilities due to a downstream DevOps build/deploy model.

Setup S3

S3 is low-cost object storage with effectively endless space. It's important to note that larger files (over 5GB) require chunked (multipart) PUT requests. Several tools handle that; on the Mac I'd recommend Transmit or Cyberduck. These tools also extend the ability to set cloud settings by file extension, or at minimum to edit the Cache-Control or Content-Type headers found in the S3 Properties, Metadata section at upload time.

We will be setting up a public bucket for a static HTML site. If you miss any of this on the first pass, you can always go back and tweak it further. Just remember my prior warnings about CloudFront caching.

  1. In AWS Console search for S3
  2. Create a bucket www.domain.com
  3. Go to Properties and enable Static Website Hosting
  4. Consider other options (Versioning, Logging etc...)
  5. Hit Permissions, uncheck the public access blocks - we are going public
  6. Bucket Policy (hint: you need to use your domain; a CLI sketch for applying it follows the policy)
{
    "Version": "2012-10-17",
    "Id": "PolicyForPublicWebsiteContent",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.domain.com/*"
        }
    ]
}
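If you'd rather apply the policy from the CLI, save the JSON above as bucket-policy.json and push it with s3api (bucket name is yours):

aws s3api put-bucket-policy --bucket www.domain.com \
    --policy file://bucket-policy.json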

S3 Non-Public Access Settings through ACL & CloudFront (Option 2)

The policy below can be auto-injected by CloudFront under the distribution's Origin settings. You essentially remove public access and let the ACL manage CloudFront's access to the object store.

{
    "Version": "2012-10-17",
    "Id": "PolicyForCloudFrontACLWebsiteContent",
    "Statement": [
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity **************"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.domain.com/*"
        }
    ]
}
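If you want to create the Origin Access Identity yourself from the CLI rather than letting the console do it, a sketch (the CallerReference is an arbitrary unique string you pick):

aws cloudfront create-cloud-front-origin-access-identity \
    --cloud-front-origin-access-identity-config \
    'CallerReference=www-domain-com-oai,Comment=OAI for www.domain.com'

The response includes the identity's ID, which is what ends up in the Principal of the bucket policy above.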

Non-Public S3 Special note on Permissions

To prevent any accidental change to public access on a bucket's ACL, you can configure public access settings for the bucket. If you select Block new public ACLs and uploading public objects, then users can't add new public ACLs or upload public objects to the bucket. If you select Remove public access granted through public ACLs, then all existing or new public access granted by ACLs is overridden and denied.

See: S3, Bucket, Permissions, Block public Access.

Check Block all Public Access.
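The same settings can be applied from the CLI (bucket name is yours):

aws s3api put-public-access-block --bucket www.domain.com \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true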

  7. CORS Configuration (hint: no edit needed)
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>Content-Length</ExposeHeader>
    <ExposeHeader>Date</ExposeHeader>
    <ExposeHeader>ETag</ExposeHeader>
    <ExposeHeader>Connection</ExposeHeader>
    <AllowedHeader>Authorization</AllowedHeader>
    <AllowedHeader>Date</AllowedHeader>
    <AllowedHeader>Content-Type</AllowedHeader>
    <AllowedHeader>Content-Length</AllowedHeader>
</CORSRule>
</CORSConfiguration>
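Note that if you apply CORS via the CLI instead of the console, s3api expects the rules as JSON rather than XML. A sketch equivalent to the rules above, saved as cors.json (bucket name is yours):

{
    "CORSRules": [
        {
            "AllowedOrigins": ["*"],
            "AllowedMethods": ["GET"],
            "MaxAgeSeconds": 3000,
            "ExposeHeaders": ["Content-Length", "Date", "ETag", "Connection"],
            "AllowedHeaders": ["Authorization", "Date", "Content-Type", "Content-Length"]
        }
    ]
}

aws s3api put-bucket-cors --bucket www.domain.com --cors-configuration file://cors.json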

Using HTTPS?

If you are going to use HTTPS you'll now want to go to AWS Certificate Manager (ACM). We are going to create a public certificate. This is where I got a bit concerned, since CloudFront was noting a $600-per-month fee for HTTPS. As I read through the documentation I could see this applies only to Dedicated IP Custom SSL, which exists for legacy clients (roughly pre-2010) that don't support SNI; SNI Custom SSL has no monthly fee. There is still a surcharge on HTTPS requests, but it's by no means that legacy fee. You also have transfer charges related to your edge zones. Those can be reviewed on CloudFront's Pricing page (I digress).

  1. Search for ACM
  2. Request a Certificate
  3. I used Public, hit Request a certificate...

We did this because we will need it when creating the CloudFront distribution. Note: for use with CloudFront, the certificate must be requested in the us-east-1 (N. Virginia) region.
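The CLI equivalent, if you prefer it (domain is yours; DNS validation assumed):

# CloudFront only sees certificates in us-east-1
aws acm request-certificate --region us-east-1 \
    --domain-name www.domain.com --validation-method DNS

ACM then gives you a CNAME record to add at your DNS provider to prove ownership.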

Uploading to S3

You may at this stage want to upload your content to your new bucket. You can decide whether you want content directly in the root of the bucket, or whether you just want to copy up your dist/ folder for ease of removing or replacing files in the future. There are easy ways of doing either, so I don't think any decision here will paint you into a corner. Now, if you opted to use Transmit, go to Transmit, Preferences and select Cloud. You'll want to make entries for each file type you use.

Example:

  • css Cache-Control max-age=15778463 (6 months)

This will place these headers on the file in S3, and CloudFront will perform a pass-through, since CloudFront does not directly manage headers like this. Failure to do this will result in 'leverage browser caching' warnings in many web scanning tools, since no expires headers (i.e. Cache-Control) are set.

You can also do this individually in S3 under Metadata, but seriously, who wants to do that one by one. The CLI can handle it in bulk at upload time; a sketch follows.
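A minimal sketch, assuming a dist/ build folder and the bucket from earlier:

# Sync the build output up, stamping a 6-month Cache-Control as it goes
aws s3 sync dist/ s3://www.domain.com/ \
    --cache-control "max-age=15778463"

In practice you'd likely run one sync per file type, so HTML can get a shorter max-age than hashed CSS/JS assets.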

CloudFront

  1. Create Distribution
  2. I pointed my Origin Domain and Path to my bucket www.domain.com.s3.amazonaws.com
  3. Choose your Viewer Protocol Policy: HTTP and HTTPS, or HTTPS only
  4. Security Policy TLSv1.1+
  5. HTTP/2, HTTP/1.1, HTTP/1.0
  6. Default Root Object (index.html)
  7. I later added an alternate domain name (CNAME) for www.domain.com
  8. Choose your Allowed HTTP Methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
  9. I use Origin Cache Headers (pass-through from S3)
  10. Choose Compress Objects Automatically (gzip)
  11. Error Pages are important if you use a Framework with a Router. You'll want to set 404 and 403. I route them back to /index.html and toss a 200.
  12. You may want to consider Geo Restrictions if you aren't operating in certain areas of the world.
  13. Invalidations can be made: * would flush everything, or you can pass specific paths (a CLI sketch follows this list).
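An invalidation sketch from the CLI (the distribution ID is a placeholder; remember each path counts against your free allotment):

aws cloudfront create-invalidation --distribution-id E123EXAMPLE \
    --paths "/index.html" "/css/*"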

Once created you should get a CloudFront Domain Name (************.cloudfront.net). I created a CNAME in my registrar's DNS to forward sub.domain.com to it, and as mentioned, went back later and added the alternate domain name so it would work correctly.
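If you happen to be on Route 53 rather than an external registrar, the same CNAME can be scripted (hosted zone ID and CloudFront domain are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
    --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
      "Name":"www.domain.com","Type":"CNAME","TTL":300,
      "ResourceRecords":[{"Value":"d123example.cloudfront.net"}]}}]}'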

For Non-Public S3 Buckets, use ACL access. (see prior statement about non-public S3)

Under Origin Settings

  • Restrict Bucket Access (Yes)
  • Origin Access Identity (Use existing or Create)
  • Grant Permissions (auto or manual) per above example

This results in bucketId.s3.amazonaws.com returning a 403, while the CloudFront distribution works as expected.
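A quick check from the command line (domain and file are placeholders):

curl -sI https://s3.amazonaws.com/www.domain.com/index.html | head -n 1   # expect 403
curl -sI https://www.domain.com/index.html | head -n 1                    # expect 200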

Cost Benefits

This greatly depends on your traffic. However, a very basic site with low traffic "should" be in the single-digit dollar amounts. If you are pushing a ton of content across the wire (again, you pay for what you use) you are benefiting from thousands of Amazon edge nodes vs. that poor little EC2 that probably wasn't load balanced to begin with. So it all greatly depends on what you're doing, but if you want to reduce costs, this is a great way to do it and have peace of mind that it can scale when you do.

Anyone who has ever broken out of the EC2 free tier for the first time due to an "Out of Memory" crash knows what I'm talking about. It's a ticking time bomb: the next time your site gets popular enough, you're stepping up to the next EC2 in the tier, spinning up 2 instances, or setting up an ELB/ALB and going beyond a $50+ per month bill.

AWS CLI

I only mention this currently as a primer for CLI actions vs. Transmit/Cyberduck.

You could manage setting headers on files in S3 via the CLI like so -

# Copy the bucket onto itself, replacing metadata to set Cache-Control on images
aws s3 cp s3://s3-bucket/ s3://s3-bucket/ --recursive \
    --exclude "*" --include "*.jpg" --include "*.gif" --include "*.png" \
    --metadata-directive REPLACE --cache-control max-age=86400