Feel free to contact me at robert.balicki@gmail.com or tweet at me @statisticsftw
This is a rough outline of how we use Next.js with S3/CloudFront. Hope it helps!
It assumes some knowledge of AWS.
- HTML files should never, ever be cached. Right now, our home page (just the HTML) is 169kb, so not caching HTML files is not a big deal.
- Every resource referenced by an HTML file, or referenced by another resource, e.g.
  /_next/88e8903c1dd55c7395ca294c82ca3ef0/app.js
  should contain a unique hash and be cached forever.
- Every resource should be hashed based on its contents, so for example,
  /static/src/images/logo.svg/500f58bff3bac5e0623ac7b0ff8341f7.svg
  will be re-used across builds if the underlying SVG does not change.
- Every resource will be available forever (or, as long as your S3 bucket policy allows).
This ensures that an old client who is visiting your website will never start to
receive 404s when clicking around.
- However, an old client will get the latest version when refreshing. If you get rid of /about, an old client can still navigate to /about (because navigation in Next.js does not cause full page refreshes).
- There is a trade-off between inlining resources in HTML files or requesting them. Either they will be available more quickly for the initial render, or they will bloat every HTML file on every request. This is especially problematic for large files that are used across pages, e.g. fonts.
- Rolling back to a particular commit should be trivial.
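The content-addressed naming above can be sketched in a few lines of shell. The md5 hash here is an assumption for illustration (file-loader's [hash] may use a different algorithm); the property is the same either way: the URL only changes when the bytes change, so unchanged assets stay cached across builds.

```shell
# Illustrative sketch of content-addressed asset naming.
# md5 is an assumption; the point is that the hash (and therefore the
# URL) changes only when the file's contents change.
HASH=$(printf 'fake svg contents' | md5sum | cut -d' ' -f1)
echo "static/src/images/logo.svg/$HASH.svg"
```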
- In S3, we have the following directory structure:

  /$NAMESPACE
    /current/[currently deployed html files]
    /builds/$GITHASH/[all the html files for that $GITHASH]
    /static/$ASSET_PATH/$ASSET_NAME.$ASSET_EXT/$ASSET_HASH.$ASSET_EXT
    /_next/[next stuff]

  $NAMESPACE is like "production", "staging", etc., and it will be a constant in this gist.
Why do we have both a /static and a /_next folder? They play the same role (serving static files). However, Next can be inscrutable, and sometimes it's best to have two folders and let Next do its thing.
```shell
next build
next export

# at this point, we have an out directory as follows:
# out/[all html files]
# out/_next/[all next files]
# and we have a static directory as follows:
# static/$ASSET_PATH/$ASSET_NAME.$ASSET_EXT/$ASSET_HASH.$ASSET_EXT

# now, we move things around to work with the S3 config described above.
# the goal is to have this new directory structure reflect what we want
# in S3, so that we can do aws s3 cp ./out s3://$S3_BUCKET/$NAMESPACE/
# (even though, as you'll see, we don't exactly do that).
mv out/_next .   # out/ now contains only html files
mv out _out
mkdir -p out/builds/$GITHASH
mv _out/* out/builds/$GITHASH
rm -rf _out
mv _next out/

# our seo/ folder contains static things, like sitemap.xml, which are not
# managed by next.
cp seo/* out/builds/$GITHASH
```
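The directory shuffle above can be dry-run with dummy files to see where everything lands. All paths here are illustrative stand-ins for the real build output:

```shell
# Dry-run of the directory shuffle with dummy files (no next or aws needed).
GITHASH=abc1234
mkdir -p demo/out/_next demo/seo
touch demo/out/index.html demo/out/_next/app.js demo/seo/sitemap.xml
cd demo
mv out/_next .                 # out/ now contains only html files
mv out _out
mkdir -p out/builds/$GITHASH
mv _out/* out/builds/$GITHASH
rm -rf _out
mv _next out/
cp seo/* out/builds/$GITHASH
find out -type f | sort
# -> out/_next/app.js
#    out/builds/abc1234/index.html
#    out/builds/abc1234/sitemap.xml
```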
```shell
# copy the _next and static folders, and make the files immutable
aws s3 cp ./out/_next s3://$S3_BUCKET/$NAMESPACE/_next \
  --cache-control immutable,max-age=100000000,public \
  --acl public-read \
  --recursive
aws s3 cp ./static/ s3://$S3_BUCKET/$NAMESPACE/static/ \
  --cache-control immutable,max-age=100000000,public \
  --acl public-read \
  --recursive

# copy the out/builds folder, and make the files never cached.
# NOTE: there is a bug in AWS. If you copy a file that has been uploaded
# as immutable using aws cp and try to modify its cache-control metadata,
# it will retain its old metadata. Hence, we can't just do
# aws s3 cp ./out s3://$S3_BUCKET/$NAMESPACE
aws s3 cp ./out/builds s3://$S3_BUCKET/$NAMESPACE/builds \
  --cache-control max-age=0,no-cache \
  --acl public-read \
  --recursive
```
```shell
# Now, we've uploaded out/builds/$GITHASH/about/index.html to
# builds/$GITHASH/about/index.html
# But, s3 is stupid. When you request /about (without the terminal slash),
# it will only look for /about (no extension). So, we need a separate step
# to upload the html files redundantly. :)
(cd out/builds &&
find . -type f -name '*.html' | while read HTMLFILE; do
  HTMLFILESHORT=${HTMLFILE:2}
  HTMLFILE_WITHOUT_INDEX=${HTMLFILESHORT::${#HTMLFILESHORT}-11}
  # cp /about/index.html to /about
  aws s3 cp s3://$S3_BUCKET/$NAMESPACE/builds/${HTMLFILESHORT} \
    s3://$S3_BUCKET/$NAMESPACE/builds/$HTMLFILE_WITHOUT_INDEX
  if [ $? -ne 0 ]; then
    echo "***** Failed renaming build to $S3_BUCKET/$NAMESPACE (html)"
    exit 1
  fi
done)

# locally, we can't have a file named about and a folder named about/ in
# the same directory. Hence, we have to do a lot of individual copies.
# This step takes up a lot of time, but there's not much else we can do.
#
# These files need Content-Type: text/html metadata, which they inherit
# from the original files.
```
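The ${HTMLFILESHORT::...} expression in the loop is bash's substring expansion, which reads a little cryptically. A standalone demonstration, using a made-up path (requires bash, not plain sh):

```shell
# Demonstration of the substring trick used in the loop above (bash).
HTMLFILE="./about/index.html"
HTMLFILESHORT=${HTMLFILE:2}     # strip the leading "./" -> about/index.html
# keep everything except the trailing "/index.html" (11 characters)
HTMLFILE_WITHOUT_INDEX=${HTMLFILESHORT::${#HTMLFILESHORT}-11}
echo "$HTMLFILE_WITHOUT_INDEX"  # -> about
# an equivalent, more readable form is suffix removal:
echo "${HTMLFILESHORT%/index.html}"
```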
Each of these copies is safe, because every file has a hash of some sort in its path.
Deploying the site is simple:

```shell
aws s3 sync \
  s3://$S3_BUCKET/$NAMESPACE/builds/$GITHASH \
  s3://$S3_BUCKET/$NAMESPACE/current \
  --delete \
  --cache-control max-age=0,no-cache \
  --acl public-read
```
This syncs the /builds/$GITHASH folder with /current: it deletes all files in /current and replaces them with whatever's in /builds/$GITHASH.
(Obviously, rolling back just involves deploying a different $GITHASH.)
All of the following is one-time setup.
- Enable static website hosting, with an index document of index.html and an error document of prod/current/404.
  - Note the URL; we'll refer to it later as $S3_URL.
  - This means we will share the same 404 document across all namespaces. Not ideal!
- Bucket policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
```
- CORS configuration (I'm not sure if this is necessary.)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```
- If you want to lock down this bucket further, by all means, do it.
- Get an ACM certificate.
- We use CloudFront for two reasons: to provide SSL (HTTPS) and to gzip all of our files. We don't really care about anything else.
- You will be creating one CloudFront web distribution per $NAMESPACE.
- Create two origins:
  - $S3_URL/$NAMESPACE/
  - $S3_URL/$NAMESPACE/current
- Create three behaviors, leaving all defaults except changing:
  - path pattern: _next*, origin: $NAMESPACE, viewer protocol policy: HTTPS Only, cache based on selected request headers: Whitelist, whitelist headers: Origin, compress objects automatically: yes
  - path pattern: static*, origin: $NAMESPACE, viewer protocol policy: HTTPS Only, cache based on selected request headers: Whitelist, whitelist headers: Origin, compress objects automatically: yes
  - path pattern: default (*), origin: $NAMESPACE/current, viewer protocol policy: Redirect HTTP to HTTPS, cache based on selected request headers: Whitelist, whitelist headers: Origin, compress objects automatically: yes
- Use the ACM certificate you created above.
- Specify your CNAMEs: www.yoursite.com, yoursite.com
- CloudFront exposes an ugly URL. We use Route53 to map yoursite.com to d1231231231.cloudfront.net.
- In your hosted zone, create an alias record pointing www.yoursite.com to your cloudfront.
- type: A - IPv4 Address
- alias: yes
- alias target: select your cloudfront
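As a sketch, the console steps above correspond to a Route53 change batch like the following (www.yoursite.com and d1231231231.cloudfront.net are placeholders from this gist; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for all CloudFront alias targets):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.yoursite.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1231231231.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```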
In your next.config.js, have the following (closing braces added so the snippet parses):

```javascript
config.module.rules.push(
  {
    test: /\.(css|scss|svg)$/,
    exclude: /node_modules/,
    loader: 'emit-file-loader',
    options: {
      name: '[path][name].[ext]/[hash].[ext]',
    },
  },
  {
    test: /\.(svg|png|gif|mp4|jpg|otf|eot|ttf)$/,
    use: [
      'babel-loader',
      {
        loader: 'file-loader',
        options: {
          outputPath: '../static/',
          name: '[path][name].[ext]/[hash].[ext]',
          publicPath: '/static/',
        },
      },
    ],
  }
);
```
In your .babelrc, have the following:

```json
"plugins": [
  [
    "babel-plugin-file-loader",
    {
      "extensions": ["otf", "eot", "woff", "woff2", "png", "jpg", "svg", "mp4", "gif", "ico"],
      "publicPath": "/static/",
      "outputPath": "static/",
      "name": "[path][name].[ext]/[hash].[ext]",
      "context": "babelrc"
    }
  ]
]
```
Note that this will output things into static as static/_/$ASSET_PATH... and TBH, I'm not sure why, but it still works.
- Files are served locally using https://github.com/zeit/next.js/wiki/Centralizing-Routing
- We modified the above to proxy requests to a different server when the user requests /app/whatever, which is done through another CloudFront behavior when deployed.
Hope that helps!