@rbalicki2
Created April 21, 2018 20:35

How we incorporate next and cloudfront (2018-04-21)

Feel free to contact me at robert.balicki@gmail.com or tweet at me @statisticsftw

This is a rough outline of how we utilize next.js and S3/Cloudfront. Hope it helps!

It assumes some knowledge of AWS.

Goals

  • No HTML file should ever be cached. Right now, our home page (just the html) is 169kb, so it's not a big deal that we're not caching html files.
  • Every resource referenced by an html file, or referenced by another resource, e.g. /_next/88e8903c1dd55c7395ca294c82ca3ef0/app.js should contain a unique hash and be cached forever.
  • Every resource should be hashed based on its contents, so for example, /static/src/images/logo.svg/500f58bff3bac5e0623ac7b0ff8341f7.svg will be re-used across builds if the underlying svg does not change.
  • Every resource will be available forever (or, as long as your S3 bucket policy allows). This ensures that an old client who is visiting your website will never start to receive 404s when clicking around.
    • However, this old client will get the latest when refreshing. If you get rid of /about, the old client can still navigate to /about (because navigation in next does not cause full page refreshes).
  • There is a trade-off between inlining resources in HTML files and requesting them separately: inlined resources are available immediately for the initial render, but they bloat every HTML file on every request. This is especially problematic for large files that are used across pages, e.g. fonts.
  • Rolling back to a particular commit should be trivial.

Directory structure: S3

  • In S3, we have the following directory structure:
/$NAMESPACE
           /current/[currently deployed html files]
           /builds/$GITHASH/[all the html files for that $GITHASH]
           /static/$ASSET_PATH/$ASSET_NAME.$ASSET_EXT/$ASSET_HASH.$ASSET_EXT
           /_next/[next stuff]
  • $NAMESPACE is like "production", "staging", etc. and it will be a constant in this gist.
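Concretely, with $NAMESPACE = production and a made-up build hash abc1234 (the asset hashes below are the examples from the goals section), keys in the bucket would look like:

```
production/current/about/index.html
production/builds/abc1234/about/index.html
production/_next/88e8903c1dd55c7395ca294c82ca3ef0/app.js
production/static/src/images/logo.svg/500f58bff3bac5e0623ac7b0ff8341f7.svg
```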

Why do we have both a /static and a /_next folder?

These play the same role (serving static files). However, next can be inscrutable, and sometimes it's best to have two folders and let next do its thing.

What is the deploy process?

Build step

next build
next export
# at this point, we have an out directory as follows
# out/[all html files]
# out/_next/[all next files]
# and we have a static directory as follows:
# static/$ASSET_PATH/$ASSET_NAME.$ASSET_EXT/$ASSET_HASH.$ASSET_EXT

# now, we move things around to work with the S3 config described above
# the goal is to be able to have this new directory structure reflect what
# we want in S3, so that we can do aws s3 cp ./out s3://$S3_BUCKET/$NAMESPACE/
# (even though, as you'll see, we don't exactly do that).
mv out/_next . # out/ contains only html files
mv out _out
mkdir -p out/builds/$GITHASH
mv _out/* out/builds/$GITHASH
rm -rf _out
mv _next out/

# our seo/ folder contains static things, like sitemap.xml, which are not
# managed by next.
cp seo/* out/builds/$GITHASH
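The sequence of moves above can be dry-run locally on a throwaway tree. This sketch uses placeholder files and a made-up GITHASH; in the real script, GITHASH would presumably come from something like `git rev-parse --short HEAD` (it is never defined in this gist).

```shell
# simulate the directory shuffle with placeholder files
set -e
WORK=$(mktemp -d)
cd "$WORK"
GITHASH=abc1234   # made-up hash for the dry run

# fake "next export" output
mkdir -p out/_next/chunks out/about
touch out/index.html out/about/index.html out/_next/chunks/app.js

mv out/_next .               # out/ now contains only html files
mv out _out
mkdir -p out/builds/$GITHASH
mv _out/* out/builds/$GITHASH
rm -rf _out
mv _next out/

find out -type f | sort      # html ends up under builds/, js under _next/
```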

Upload step

# copy _next and static folders, and make the files immutable
aws s3 cp ./out/_next s3://$S3_BUCKET/$NAMESPACE/_next \
  --cache-control immutable,max-age=100000000,public \
  --acl public-read \
  --recursive

aws s3 cp ./static/ s3://$S3_BUCKET/$NAMESPACE/static/ \
  --cache-control immutable,max-age=100000000,public \
  --acl public-read \
  --recursive

# copy the out/builds folder, and make the files never cached.
# NOTE: there is a bug in AWS. If you copy a file that has been
# uploaded as immutable using aws cp and try to modify its cache-control
# metadata, it will retain its old metadata. Hence, we can't just do
# aws s3 cp ./out s3://$S3_BUCKET/$NAMESPACE
aws s3 cp ./out/builds s3://$S3_BUCKET/$NAMESPACE/builds \
  --cache-control max-age=0,no-cache \
  --acl public-read \
  --recursive
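To spot-check that the Cache-Control metadata landed as expected, you can inspect a single uploaded object. A sketch (the helper name and example key are made up):

```shell
# hypothetical helper: print the stored Cache-Control header for one S3 object
check_cache_control() {
  aws s3api head-object \
    --bucket "$1" --key "$2" \
    --query CacheControl --output text
}

# e.g. check_cache_control "$S3_BUCKET" "$NAMESPACE/_next/some/chunk.js"
```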
  
# Now, we've uploaded out/builds/$GITHASH/about/index.html to
# builds/$GITHASH/about/index.html
# But, s3 is stupid. When you request /about (without the terminal slash),
# it will only look for /about (no extension). So, we need a separate step
# to upload the html files redundantly. :)
(cd out/builds &&
  find . -type f -name '*.html' | while read HTMLFILE; do
    HTMLFILESHORT=${HTMLFILE:2}
    HTMLFILE_WITHOUT_INDEX=${HTMLFILESHORT::${#HTMLFILESHORT}-11}

    # cp /about/index.html to /about
    aws s3 cp s3://$S3_BUCKET/$NAMESPACE/builds/${HTMLFILESHORT} \
      s3://$S3_BUCKET/$NAMESPACE/builds/$HTMLFILE_WITHOUT_INDEX

    if [ $? -ne 0 ]; then
      echo "***** Failed renaming build to $S3_BUCKET/$NAMESPACE (html)"
      exit 1
    fi
  done)

# locally, we can't have a file named about and a folder named about/ in the
# same directory. Hence, we have to do a lot of individual copies.
# This step takes up a lot of time, but there's not much else we can do.
#
# These files need Content-Type: text/html metadata, which they inherit from
# the original files.

Each of these copies is safe, because every file has a hash of some sort in its path.
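The bash parameter expansion in the loop above is terse; here is the same logic run in isolation on one example path (bash-only syntax, not POSIX sh):

```shell
# standalone sketch of the substring logic used in the upload loop
HTMLFILE="./about/index.html"   # one line of find output
HTMLFILESHORT=${HTMLFILE:2}     # drop the leading "./"  -> about/index.html
# drop the trailing "/index.html" (11 characters)        -> about
HTMLFILE_WITHOUT_INDEX=${HTMLFILESHORT::${#HTMLFILESHORT}-11}
echo "$HTMLFILE_WITHOUT_INDEX"  # prints "about"
```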

Deploy step

Deploying the site is simple:

aws s3 sync \
  s3://$S3_BUCKET/$NAMESPACE/builds/$GITHASH \
  s3://$S3_BUCKET/$NAMESPACE/current \
  --delete \
  --cache-control max-age=0,no-cache \
  --acl public-read

Sync the /builds/$GITHASH folder with /current. This will delete all files in /current and replace them with whatever's in /builds/$GITHASH.

(Obviously, rolling back just involves deploying a different $GITHASH.)
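Since deploy and rollback are the same command with a different hash, a small wrapper makes that explicit (the helper name is hypothetical, not from the original script):

```shell
# hypothetical wrapper: deploy (or roll back to) any previously uploaded build
deploy_build() {
  local githash="$1"
  aws s3 sync \
    "s3://$S3_BUCKET/$NAMESPACE/builds/$githash" \
    "s3://$S3_BUCKET/$NAMESPACE/current" \
    --delete \
    --cache-control max-age=0,no-cache \
    --acl public-read
}

# deploy_build "$(git rev-parse --short HEAD)"  # ship the current commit
# deploy_build abc1234                          # roll back to an older build
```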

What are the AWS settings?

This is all one-time setup

S3

  • Enable static website hosting, with an index document of index.html and an error document of prod/current/404.
    • Note the URL, we'll refer to it later as $S3_URL
  • This means we will share the same 404 document across all namespaces. Not ideal!
  • Bucket policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject"
            "Resource": "arn:aws:s3:::my-bucket-name/*"
        }
    ]
}
  • CORS configuration (I'm not sure if this is necessary.)
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
   <AllowedMethod>HEAD</AllowedMethod>
   <MaxAgeSeconds>3000</MaxAgeSeconds>
   <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
  • If you want to lock down this bucket further, by all means, do it.

ACM

  • Get an ACM certificate.

Cloudfront

  • We use cloudfront for two reasons: to provide SSL (https) and to gzip all of our files. We don't really care about anything else.
  • You will be creating one cloudfront web distribution per $NAMESPACE.
  • Create two origins:
    • $S3_URL/$NAMESPACE/
    • $S3_URL/$NAMESPACE/current
  • Create three behaviors, leaving all defaults except changing:
    • path pattern: _next*, origin: $NAMESPACE, viewer protocol policy: HTTPS Only, Cache based on selected request headers: Whitelist, whitelist headers: Origin, compress objects automatically: yes
    • path pattern: static*, origin: $NAMESPACE, viewer protocol policy: HTTPS Only, Cache based on selected request headers: Whitelist, whitelist headers: Origin, compress objects automatically: yes
    • path pattern: default (*), origin: $NAMESPACE/current, viewer protocol policy: Redirect HTTP to HTTPS, Cache based on selected request headers: Whitelist, whitelist headers: Origin, compress objects automatically: yes
  • Use the ACM certificate you used above.
  • Specify a CNAME: www.yoursite.com, yoursite.com

Route53

  • Cloudfront exposes an ugly URL. We use Route53 to map yoursite.com to d1231231231.cloudfront.net
  • In your hosted zone, create an alias record pointing www.yoursite.com to your cloudfront.
    • type: A - IPv4 Address
    • alias: yes
    • alias target: select your cloudfront

next configuration

In your next.config.js, have the following

    config.module.rules.push(
      {
        test: /\.(css|scss|svg)$/,
        exclude: /node_modules/,
        loader: 'emit-file-loader',
        options: {
          name: '[path][name].[ext]/[hash].[ext]',
        },
      },
      {
        test: /\.(svg|png|gif|mp4|jpg|otf|eot|ttf)$/,
        use: [
          'babel-loader',
          {
            loader: 'file-loader',
            options: {
              outputPath: '../static/',
              name: '[path][name].[ext]/[hash].[ext]',
              publicPath: '/static/',
            },
          },
        ],
      },
    );

In your babelrc, have the following

  "plugins": [
    [
      "babel-plugin-file-loader",
      {
        "extensions": ["otf", "eot", "woff", "woff2", "png", "jpg", "svg", "mp4", "gif", "ico"],
        "publicPath": "/static/",
        "outputPath": "static/",
        "name": "[path][name].[ext]/[hash].[ext]",
        "context": "babelrc"
      }
    ],
  ]

Note that this will output things into static as static/_/$ASSET_PATH... and TBH, I'm not sure why, but it still works.

Local development

Hope that helps!

Feel free to contact me at robert.balicki@gmail.com or tweet at me @statisticsftw


thelebster commented Jan 5, 2020

Hi there, if anyone is interested, I use the following snippet for Bitbucket Pipelines to deploy a Next.js site to S3:

pipelines:
  branches:
    master:
      - step:
          name: Build
          image: node:12-slim
          caches:
            - node
          script:
            - yarn install
            - yarn build
            - yarn export
          artifacts:
            - out/**
      - step:
          name: Deploy
          script:
            - pipe: atlassian/aws-s3-deploy:0.3.7
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                S3_BUCKET: $S3_BUCKET
                LOCAL_PATH: "out"
                DELETE_FLAG: "true"
                ACL: "public-read"
      - step:
          name: Fix routes
          image: atlassian/pipelines-awscli
          script:
            - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
            - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
            - export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
            - export S3_BUCKET=$S3_BUCKET
            - find ./out -type f -name '*.html' | while read HTMLFILE; do
                BASENAME=${HTMLFILE##*/};
                FILENAME=${BASENAME%.*};
                if [[ "$FILENAME" != "index" ]];
                then
                  aws s3 cp s3://${S3_BUCKET}/${BASENAME} s3://${S3_BUCKET}/${FILENAME};
                fi
              done

This is my next.config.js file:

...
  exportTrailingSlash: false, // Does not make sense in case of using s3.
  exportPathMap: function() {
    return {
      '/': { page: '/' },
      '/home': { page: '/home' },
    };
  },
...

This is my index.js:

...
export default class Index extends Component {
  componentDidMount = () => {
    Router.push("/home");
  };

  render() {
    return <div />;
  }
}

This allows opening the site directly via http://example.com/home. Hope this helps someone.


webchi commented Feb 4, 2020

How to deal with SSR?


armenr commented Feb 6, 2020

@webchi

How to deal with SSR?

SSR still happens: you split out the statically built assets from the server-rendered (dynamic) assets with the method in this gist, and you prepend the CloudFront URL to the static assets via next.config.js:

const isProd = process.env.NODE_ENV === 'production';

module.exports = {
  // Use the CDN in production and localhost for development.
  assetPrefix: isProd ? 'https://cdn.mydomain.com' : '',
};


TuuZzee commented May 7, 2020

Hi, this looks great!
I am wondering, would this work with dynamic routes? Especially on direct access or refresh.

@georgiosd

Shouldn't the top of the build script have:

GITHASH=$(git rev-parse --short HEAD)

?


Jaynam07 commented Apr 21, 2021

Getting error on deploying Next js Prerendered static pages on AWS S3

Issue - Page works if .html is appended to the URL but on removing .html from URL it shows an error

Example -
https://www.example.com/abc.html - this will work
https://www.example.com/abc - this is not working

Any solution or workaround?


dickwyn commented Apr 26, 2021

@Jaynam07 I managed to get it working by setting up a simple Lambda function + Cloudfront as mentioned in this blog post

https://sosnowski.dev/post/static-serverless-site-with-nextjs#lambda---edge-for-routing


RishikeshDarandale commented May 12, 2021

Getting error on deploying Next js Prerendered static pages on AWS S3

Issue - Page works if .html is appended to the URL but on removing .html from URL it shows an error

Example -
https://www.example.com/abc.html - this will work
https://www.example.com/abc - this is not working

Any solution or workaround?

@Jaynam07 You can use the newly released CloudFront Functions; here is sample code.

next.config.js

module.exports = {
  trailingSlash: true,
};

Note: If you are using static website hosting, then you do not need any function!


longzheng commented Oct 11, 2021

I got static export paths working with CloudFront using a Lambda@Edge (origin request) function.

exports.handler = async (event) => {
    const eventRecord = event.Records[0];
    const request = eventRecord.cf.request;
    const uri = request.uri;
    // if URI includes ".", indicates file extension, return early and don't modify URI
    if (uri.includes('.')) {
        return request;
    }
    // if URI ends with "/" slash, then we need to remove the slash first before appending .html
    if (uri.endsWith('/')) {
        request.uri = request.uri.substring(0, request.uri.length - 1);
    }
    request.uri += '.html';
    return request;
};

My next.config.js is using no trailing slash

module.exports = {
  trailingSlash: false,
};


Pelicer commented Nov 24, 2021

Hey! Great gist. I'm having a similar problem. My issue, however, relates to dynamic routing of a Next.js application behind CloudFront. How do you translate dns.com/path/[id] into something CloudFront understands? I have my question with further details in this SO link: https://stackoverflow.com/questions/70096145/nextjs-dynamic-routing-in-amazon-cloudfront


longzheng commented Nov 24, 2021

Hey! Great gist. I'm having a similar problem. My issue, however, relates to dynamic routing of a Next.js application behind CloudFront. How do you translate dns.com/path/[id] into something CloudFront understands? I have my question with further details in this SO link: https://stackoverflow.com/questions/70096145/nextjs-dynamic-routing-in-amazon-cloudfront

I prepend checks for my dynamic routes to the CloudFront Lambda@Edge (origin request) function, so it handles those too.

export const handler: CloudFrontRequestHandler = async (event) => {
    const eventRecord = event.Records[0];
    const request = eventRecord.cf.request;
    const uri = request.uri;

    // handle /posts/[id] dynamic route
    if (uri === '/posts' || uri.startsWith('/posts/')) {
        request.uri = "/posts/[id].html";
        return request;
    }
    
    // if URI includes ".", indicates file extension, return early and don't modify URI
    if (uri.includes('.')) {
        return request;
    }

    // if URI ends with "/" slash, then we need to remove the slash first before appending .html
    if (uri.endsWith('/')) {
        request.uri = request.uri.substring(0, request.uri.length - 1);
    }

    request.uri += '.html';
    return request;
};


sladg commented Oct 21, 2022

FYI - alternative in case you cannot statically export and don't like @edge functions.
https://github.com/sladg/nextjs-lambda

Also, thanks for the inspiration, I'm considering adding static-only functionality :)

@linda-benboudiaf

Hi there,

Is there any issue when you click refresh in the browser? I tested with the S3 endpoint and there was no issue, but with CloudFront it does not reach the correct page.

Did you solve this problem? I got the same!! It drives me crazy!
