
@rbalicki2
Created April 21, 2018 20:35

How we incorporate next and cloudfront (2018-04-21)

Feel free to contact me at robert.balicki@gmail.com or tweet at me @statisticsftw

This is a rough outline of how we utilize next.js and S3/Cloudfront. Hope it helps!

It assumes some knowledge of AWS.

Goals

  • Every html file should never, ever be cached. Right now, our home page (just the html) is 169kb, so it's not a big deal that we're not caching html files. (A curl sanity check for these cache policies follows this list.)
  • Every resource referenced by an html file, or referenced by another resource, e.g. /_next/88e8903c1dd55c7395ca294c82ca3ef0/app.js should contain a unique hash and be cached forever.
  • Every resource should be hashed based on its contents, so for example, /static/src/images/logo.svg/500f58bff3bac5e0623ac7b0ff8341f7.svg will be re-used across builds if the underlying svg does not change.
  • Every resource will be available forever (or, as long as your S3 bucket policy allows). This ensures that an old client who is visiting your website will never start to receive 404s when clicking around.
    • However, this old client will get the latest when refreshing. If you get rid of /about, the old client can still navigate to /about (because navigation in next does not cause full page refreshes).
  • There is a trade-off between inlining resources in HTML files and requesting them separately. Either they will be available more quickly for the initial render, or they will bloat every HTML file on every request. This is especially problematic for large files that are used across pages, e.g. fonts.
  • Rolling back to a particular commit should be trivial.
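Concretely, once a deploy is live you can sanity-check the two cache policies from the command line (a sketch; the domain is a placeholder and the app.js path is the hashed example from above):

# HTML: expect Cache-Control: max-age=0,no-cache
curl -sI https://www.yoursite.com/about | grep -i cache-control

# hashed asset: expect Cache-Control: immutable,max-age=100000000,public
curl -sI https://www.yoursite.com/_next/88e8903c1dd55c7395ca294c82ca3ef0/app.js | grep -i cache-control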

Directory structure: S3

  • In S3, we have the following directory structure:
/$NAMESPACE
           /current/[currently deployed html files]
           /builds/$GITHASH/[all the html files for that $GITHASH]
           /static/$ASSET_PATH/$ASSET_NAME.$ASSET_EXT/$ASSET_HASH.$ASSET_EXT
           /_next/[next stuff]
  • $NAMESPACE is like "production", "staging", etc. and it will be a constant in this gist.

Why do we have both a /static and a /_next folder?

These play the same role (serving static files). However, next can be inscrutable, and sometimes it's best to have two folders and let next do its thing.

What is the deploy process?

Build step
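Note: the script below assumes $GITHASH is already set to the commit being deployed, e.g. at the top of the build script:

GITHASH=$(git rev-parse --short HEAD)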

next build
next export
# at this point, we have an out directory as follows
# out/[all html files]
# out/_next/[all next files]
# and we have a static directory as follows:
# static/$ASSET_PATH/$ASSET_NAME.$ASSET_EXT/$ASSET_HASH.$ASSET_EXT

# now, we move things around to work with the S3 config described above
# the goal is to be able to have this new directory structure reflect what
# we want in S3, so that we can do aws s3 cp ./out s3://$S3_BUCKET/$NAMESPACE/
# (even though, as you'll see, we don't exactly do that).
mv out/_next . # out/ contains only html files
mv out _out
mkdir -p out/builds/$GITHASH
mv _out/* out/builds/$GITHASH
rm -rf _out
mv _next out/

# our seo/ folder contains static things, like sitemap.xml, which are not
# managed by next.
cp seo/* out/builds/$GITHASH

Upload step

# copy _next and static folders, and make the files immutable
aws s3 cp ./out/_next s3://$S3_BUCKET/$NAMESPACE/_next \
  --cache-control immutable,max-age=100000000,public \
  --acl public-read \
  --recursive

aws s3 cp ./static/ s3://$S3_BUCKET/$NAMESPACE/static/ \
  --cache-control immutable,max-age=100000000,public \
  --acl public-read \
  --recursive

# copy the out/builds folder, and make the files never cached.
# NOTE: there is a bug in AWS. If you copy a file that has been
# uploaded as immutable using aws cp and try to modify its cache-control
# metadata, it will retain its old metadata. Hence, we can't just do
# aws s3 cp ./out s3://$S3_BUCKET/$NAMESPACE
aws s3 cp ./out/builds s3://$S3_BUCKET/$NAMESPACE/builds \
  --cache-control max-age=0,no-cache \
  --acl public-read \
  --recursive
  
# Now, we've uploaded out/builds/$GITHASH/about/index.html to
# builds/$GITHASH/about/index.html
# But, s3 is stupid. When you request /about (without the terminal slash),
# it will only look for /about (no extension). So, we need a separate step
# to upload the html files redundantly. :)
(cd out/builds &&
  find . -type f -name '*.html' | while read HTMLFILE; do
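    # strip the leading "./" from find's output, then drop the trailing
    # "/index.html" (11 characters) to get the extensionless key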
    HTMLFILESHORT=${HTMLFILE:2}
    HTMLFILE_WITHOUT_INDEX=${HTMLFILESHORT::${#HTMLFILESHORT}-11}

    # cp /about/index.html to /about
    aws s3 cp s3://$S3_BUCKET/$NAMESPACE/builds/${HTMLFILESHORT} \
      s3://$S3_BUCKET/$NAMESPACE/builds/$HTMLFILE_WITHOUT_INDEX

    if [ $? -ne 0 ]; then
      echo "***** Failed renaming build to $S3_BUCKET/$NAMESPACE (html)"
      exit 1
    fi
  done)

# locally, we can't have a file named about and a folder named about/ in the
# same directory. Hence, we have to do a lot of individual copies.
# This step takes up a lot of time, but there's not much else we can do.
#
# These files need Content-Type: text/html metadata, which they inherit from
# the original files.

Each of these copies is safe, because every file has a hash of some sort in its path.

Deploy step

Deploying the site is simple:

aws s3 sync \
  s3://$S3_BUCKET/$NAMESPACE/builds/$GITHASH \
  s3://$S3_BUCKET/$NAMESPACE/current \
  --delete \
  --cache-control max-age=0,no-cache \
  --acl public-read

Sync the /builds/$GITHASH folder with /current. This will delete all files in /current and replace them with whatever's in /builds/$GITHASH.

(Obviously, rolling back just involves deploying a different $GITHASH.)
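For example, rolling back is just the same sync pointed at an older build (a sketch; $OLD_GITHASH stands for whichever previously uploaded hash you want to restore):

aws s3 sync \
  s3://$S3_BUCKET/$NAMESPACE/builds/$OLD_GITHASH \
  s3://$S3_BUCKET/$NAMESPACE/current \
  --delete \
  --cache-control max-age=0,no-cache \
  --acl public-read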

What are the AWS settings?

This is all one-time setup

S3

  • Enable static website hosting, with an index document of index.html and an error document of prod/current/404 (a CLI sketch of this one-time setup follows this list).
    • Note the URL; we'll refer to it later as $S3_URL
  • This means we will share the same 404 document across all namespaces. Not ideal!
  • Bucket policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject"
            "Resource": "arn:aws:s3:::my-bucket-name/*"
        }
    ]
}
  • CORS configuration (I'm not sure if this is necessary.)
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
   <AllowedMethod>HEAD</AllowedMethod>
   <MaxAgeSeconds>3000</MaxAgeSeconds>
   <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
  • If you want to lock down this bucket further, by all means, do it.
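If you prefer to script this one-time setup, roughly the same settings can be applied from the CLI. This is a sketch, assuming the bucket already exists and the policy/CORS documents above are saved to local files (note that the s3api CLI takes a JSON equivalent of the CORS XML shown above):

# enable static website hosting with the index/error documents described above
aws s3 website s3://$S3_BUCKET/ \
  --index-document index.html \
  --error-document prod/current/404

# apply the bucket policy and CORS configuration
aws s3api put-bucket-policy --bucket $S3_BUCKET --policy file://bucket-policy.json
aws s3api put-bucket-cors --bucket $S3_BUCKET --cors-configuration file://cors.json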

ACM

  • Get an ACM certificate.

Cloudfront

  • We use cloudfront for two reasons: to provide SSL (https) and to gzip all of our files. We don't really care about anything else.
  • You will be creating one cloudfront web distribution per $NAMESPACE.
  • Create two origins:
    • $S3_URL/$NAMESPACE/
    • $S3_URL/$NAMESPACE/current
  • Create three behaviors, leaving all defaults except changing:
    • path pattern: _next*, origin: $NAMESPACE, viewer protocol policy: HTTPS Only, Cache based on selected request headers: Whitelist, whitelist headers: Origin, compress objects automatically: yes
    • path pattern: static*, origin: $NAMESPACE, viewer protocol policy: HTTPS Only, Cache based on selected request headers: Whitelist, whitelist headers: Origin, compress objects automatically: yes
    • path pattern: default (*), origin: $NAMESPACE/current, viewer protocol policy: Redirect HTTP to HTTPS, Cache based on selected request headers: Whitelist, whitelist headers: Origin, compress objects automatically: yes
  • Use the ACM certificate you used above.
  • Specify a CNAME: www.yoursite.com, yoursite.com

Route53

  • Cloudfront exposes an ugly URL. We use Route53 to map yoursite.com to d1231231231.cloudfront.net
  • In your hosted zone, create an alias record pointing www.yoursite.com to your cloudfront (a CLI sketch of the same record follows this list).
    • type: A - IPv4 Address
    • alias: yes
    • alias target: select your cloudfront
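The same record can also be created from the CLI. A sketch, assuming $HOSTED_ZONE_ID is your zone's ID; Z2FDTNDATAQYW2 is the fixed hosted zone ID that CloudFront aliases always use:

aws route53 change-resource-record-sets \
  --hosted-zone-id $HOSTED_ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.yoursite.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1231231231.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'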

next configuration

In your next.config.js, inside the webpack function, have the following:

    config.module.rules.push(
      {
        test: /\.(css|scss|svg)$/,
        exclude: /node_modules/,
        loader: 'emit-file-loader',
        options: {
          name: '[path][name].[ext]/[hash].[ext]',
        },
      },
      {
        test: /\.(svg|png|gif|mp4|jpg|otf|eot|ttf)$/,
        use: [
          'babel-loader',
          {
            loader: 'file-loader',
            options: {
              outputPath: '../static/',
              name: '[path][name].[ext]/[hash].[ext]',
              publicPath: '/static/',
            },
          },
        ],
      },
    );

In your .babelrc, have the following:

  "plugins": [
    [
      "babel-plugin-file-loader",
      {
        "extensions": ["otf", "eot", "woff", "woff2", "png", "jpg", "svg", "mp4", "gif", "ico"],
        "publicPath": "/static/",
        "outputPath": "static/",
        "name": "[path][name].[ext]/[hash].[ext]",
        "context": "babelrc"
      }
    ]
  ]

Note that this will output things into static as static/_/$ASSET_PATH... and TBH, I'm not sure why, but it still works.

Local development

Hope that helps!

Feel free to contact me at robert.balicki@gmail.com or tweet at me @statisticsftw

@jnanendraveer

Hi,
I am getting an issue when running a next js app on https in nginx:
This site can’t provide a secure connection fitpass.dev sent an invalid response.
ERR_SSL_PROTOCOL_ERROR

@rbalicki2
Author

@jnanendraveer it sounds like you're running the next app as a server (i.e. in the usual way). This guide doesn't apply if you're running next as a server. Feel free to DM me on twitter (@statisticsftw) and I can try to help you, though

@namishmudgal

Hi there,
I have been using next js + S3 + cloudfront. Configuring correctly, exporting to S3, and exposing it through cloudfront works with no issues. But I am facing a big problem with navigation, mostly on site refresh, because next js (whether via router push or next Link) appends a trailing slash to the end of the URL, like this:
https://www.$$$.com/about-us/
https://www.$$$.com/share/
https://www.$$$.com/share/?shareId=_svasaz12 (instead of https://www.$$$.com/share?shareId=_svasaz12)
This is a big problem, and I have been looking everywhere for a fix.
Whenever the user refreshes https://www.$$$.com/about-us, the next js app behaves nicely and stays on the same page.
But whenever the user refreshes https://www.$$$.com/about-us/, the next js app loses context and loads the index.html / root content for that page, and strangely the URL stays the same (like https://www.$$$.com/about-us/).
Sadly, this happens on every page under the pages folder in our next js app.

P.S. When deploying my application to s3, I remove the .html extension from each page except index.html to serve the page as static (after setting the content-type metadata to text/html). That at least makes my pages keep the same content instead of loading the root content when I refresh after manually removing the trailing slash from the address bar, but I've had no luck finding this problem's root cause.

@rbalicki2
Author

Hi @namishmudgal,

I'm having some difficulty understanding your issue, so please clarify if I'm misunderstanding. I think your issue is that /foo and /foo/, when refreshed, give you incorrect behavior.

The way I work around your issue is to upload both /about and /about/index.html to S3. See the part of the code that has this comment:

# Now, we've uploaded out/builds/$GITHASH/about/index.html to
# builds/$GITHASH/about/index.html
# But, s3 is stupid. When you request /about (without the terminal slash),
# it will only look for /about (no extension). So, we need a separate step
# to upload the html files redundantly. :)

Feel free to DM me or to clarify here, and I can try to be more helpful.

@namishmudgal

namishmudgal commented Oct 17, 2019

Thanks @rbalicki2 for the prompt response.
I have tried the same thing you suggested, both earlier and just now, but nothing seems to help. If I put a copy of index.html in the about folder in s3 (about/index.html), refreshing the https://www.$$$.com/about-us/ url crazily pops up a save-as dialog with 'download' pre-filled as the file name. This weird thing happens every time I refresh the page if I follow the above steps, as if s3 does not understand the routes of the app.
This is a problem with Next Link/Router, as it appends an unnecessary trailing slash to my page urls (except the home page / index html page of the site). For example:
<Link href={'/about'}>About</Link> or <button onClick={() => Router.push('/about')}>About</button>: in both cases a trailing slash is appended to the s3 bucket url and it becomes https://www.$$$.com/about-us/ instead of https://www.$$$.com/about-us.
If you have any other suggestions, please feel free to comment. Thanks again.

@jimmdd

jimmdd commented Nov 6, 2019

This gist didn't solve all the problems I had. I recommend checking out serverless-next.js, the most elegant solution I've found. It works with zero configuration needed. https://github.com/danielcondemarin/serverless-next.js/tree/master/packages/serverless-nextjs-component

@thelebster

thelebster commented Jan 5, 2020

Hi there, if anyone is interested, I use the following snippet for Bitbucket Pipelines to deploy a nextjs site to s3:

pipelines:
  branches:
    master:
      - step:
          name: Build
          image: node:12-slim
          caches:
            - node
          script:
            - yarn install
            - yarn build
            - yarn export
          artifacts:
            - out/**
      - step:
          name: Deploy
          script:
            - pipe: atlassian/aws-s3-deploy:0.3.7
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                S3_BUCKET: $S3_BUCKET
                LOCAL_PATH: "out"
                DELETE_FLAG: "true"
                ACL: "public-read"
      - step:
          name: Fix routes
          image: atlassian/pipelines-awscli
          script:
            - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
            - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
            - export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
            - export S3_BUCKET=$S3_BUCKET
            - find ./out -type f -name '*.html' | while read HTMLFILE; do
                BASENAME=${HTMLFILE##*/};
                FILENAME=${BASENAME%.*};
                if [[ "$FILENAME" != "index" ]];
                then
                  aws s3 cp s3://${S3_BUCKET}/${BASENAME} s3://${S3_BUCKET}/${FILENAME};
                fi
              done

This is my next.config.js file:

...
  exportTrailingSlash: false, // Does not make sense in case of using s3.
  exportPathMap: function() {
    return {
      '/': { page: '/' },
      '/home': { page: '/home' },
    };
  },
...

This is my index.js:

...
export default class Index extends Component {
  componentDidMount = () => {
    Router.push("/home");
  };

  render() {
    return <div />;
  }
}

It allows the site to be opened directly via http://example.com/home. Hope this helps someone.

@webchi

webchi commented Feb 4, 2020

How to deal with SSR?

@armenr

armenr commented Feb 6, 2020

@webchi

How to deal with SSR?

SSR still happens - you split out the statically built assets and the server-rendered (dynamic) assets with the method in this Gist, and you prepend the CloudFront URL to the static assets via the nextConfig -->

const isProd = process.env.NODE_ENV === 'production'

module.exports = {
  // Use the CDN in production and localhost for development.
  assetPrefix: isProd ? 'https://cdn.mydomain.com' : '',
}

@TuuZzee

TuuZzee commented May 7, 2020

Hi, this looks great!
I am wondering, would this work with dynamic routes? Especially on direct access or refresh.

@georgiosd

Shouldn't the top of the build script have:

GITHASH=$(git rev-parse --short HEAD)

?

@Jaynam07

Jaynam07 commented Apr 21, 2021

Getting an error when deploying Next js prerendered static pages to AWS S3

Issue - the page works if .html is appended to the URL, but on removing .html from the URL it shows an error

Example -
https://www.example.com/abc.html - this works
https://www.example.com/abc - this does not work

Any solution or workaround?

@dickwyn

dickwyn commented Apr 26, 2021

@Jaynam07 I managed to get it working by setting up a simple Lambda function + Cloudfront as mentioned in this blog post

https://sosnowski.dev/post/static-serverless-site-with-nextjs#lambda---edge-for-routing

@RishikeshDarandale

RishikeshDarandale commented May 12, 2021

Getting error on deploying Next js Prerendered static pages on AWS S3

Issue - Page works if .html is appended to the URL but on removing .html from URL it shows an error

Example -
https.www.example.com/abc.html - this will work
https.www.example.com/abc- this is not working

Any solution or workaround?

@Jaynam07 You can use the newly released cloudfront functions; here is sample code.

next.config.js

module.exports = {
  trailingSlash: true,
};

Note: If you are using static website hosting, then you do not need any function!

@longzheng

longzheng commented Oct 11, 2021

I got static export paths working with CloudFront using a Lambda@Edge (origin request) function.

exports.handler = async (event) => {
    const eventRecord = event.Records[0];
    const request = eventRecord.cf.request;
    const uri = request.uri;
    // if URI includes ".", indicates file extension, return early and don't modify URI
    if (uri.includes('.')) {
        return request;
    }
    // if URI ends with "/" slash, then we need to remove the slash first before appending .html
    if (uri.endsWith('/')) {
        request.uri = request.uri.substring(0, request.uri.length - 1);
    }
    request.uri += '.html';
    return request;
};

My next.config.js is using no trailing slash

module.exports = {
  trailingSlash: false,
};

@Pelicer

Pelicer commented Nov 24, 2021

Hey! Great gist. I'm having a similar problem. My issue, however, relates to dynamic routing of a NextJS application behind CloudFront: how do I translate dns.com/path/[id] into something CloudFront understands? I have my question with further details in this SO link: https://stackoverflow.com/questions/70096145/nextjs-dynamic-routing-in-amazon-cloudfront

@longzheng

longzheng commented Nov 24, 2021

Hey! Great gist. I'm having a similar problem. My issue, however, relates to dynamic routing of a NextJS application behind CloudFront: how do I translate dns.com/path/[id] into something CloudFront understands? I have my question with further details in this SO link: https://stackoverflow.com/questions/70096145/nextjs-dynamic-routing-in-amazon-cloudfront

I prepend handling for my dynamic routes to the CloudFront Lambda@Edge (origin request) function, so it covers those too.

// CloudFrontRequestHandler comes from the @types/aws-lambda package
import { CloudFrontRequestHandler } from 'aws-lambda';

export const handler: CloudFrontRequestHandler = async (event) => {
    const eventRecord = event.Records[0];
    const request = eventRecord.cf.request;
    const uri = request.uri;

    // handle /posts/[id] dynamic route
    if (uri === '/posts' || uri.startsWith('/posts/')) {
        request.uri = "/posts/[id].html";
        return request;
    }
    
    // if URI includes ".", indicates file extension, return early and don't modify URI
    if (uri.includes('.')) {
        return request;
    }

    // if URI ends with "/" slash, then we need to remove the slash first before appending .html
    if (uri.endsWith('/')) {
        request.uri = request.uri.substring(0, request.uri.length - 1);
    }

    request.uri += '.html';
    return request;
};

@sladg

sladg commented Oct 21, 2022

FYI - an alternative in case you cannot statically export and don't like @edge functions.
https://github.com/sladg/nextjs-lambda

Also, thanks for the inspiration, I'm considering adding static-only functionality :)

@linda-benboudiaf

Hi there,

is there any issue when you click refresh in the browser?

I tested with the s3 endpoint and there is no issue, but with cloudfront it is not able to reach the correct page

Did you solve this problem? I've got the same one!! It drives me crazy!
