Because I've done this so many times in my life but always seem to forget how.
This approach WILL:
- Redirect naked domain to www domain in all conditions
- Redirect http to https in all conditions
- Preserve URL paths and queries. So http://mydomain.com/earth/africa ultimately leads to https://www.mydomain.com/earth/africa
Downsides:
- The HTTP/Naked URL does two redirects. The first is to HTTPS/Naked, followed by HTTPS/WWW
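That end state can be sketched as a pure URL transformation (mydomain.com stands in for your real domain throughout):

```shell
# What the whole stack accomplishes: any scheme/host variant of a URL
# ends up at the canonical https://www form, path and query preserved.
canonical() {
  printf '%s\n' "$1" | sed -E 's#^https?://(www\.)?mydomain\.com#https://www.mydomain.com#'
}

canonical "http://mydomain.com/earth/africa"   # -> https://www.mydomain.com/earth/africa
canonical "https://mydomain.com/a?b=c"         # -> https://www.mydomain.com/a?b=c
```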
S3 Buckets:
- Public bucket for static files
- Empty bucket for naked domain to www redirection
Cloudfront Distributions:
- Static content distribution for www
- http -> https naked domain redirection
DNS Records:
- One A ALIAS record at www to Cloudfront Distribution #1
- One A ALIAS record at apex to Cloudfront Distribution #2
Create certificates in AWS for mydomain.com and www.mydomain.com (or mydomain.com and *.mydomain.com). If you want, these can all be in one certificate. Note that certificates used with Cloudfront must be created in the us-east-1 region.
The general idea is: get your www domain to work for HTTP and HTTPS first. Then get the apex domain to redirect there under HTTP and HTTPS.
- Create a new bucket. The bucket can have any name, but it's easiest if you use www.mydomain.com
- Add a public bucket policy for www.mydomain.com. It's in the Permissions tab. Note that the line that says Resource should reflect your public bucket's name.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::PUBLIC_BUCKET_NAME_HERE/*"
    }
  ]
}
```
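If you'd rather script it, the same policy can be attached from the CLI (a sketch; save the policy above as policy.json with your bucket name substituted):

```shell
# Attach the public-read policy to the www bucket
aws s3api put-bucket-policy \
  --bucket www.mydomain.com \
  --policy file://policy.json
```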
- Enable static hosting. Select "Host a static website", with index.html as the default document
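Those console steps map to a single CLI call, if you prefer (a sketch, assuming the bucket name above):

```shell
# Enable website hosting with index.html as the default document,
# equivalent to the console's "Host a static website" option
aws s3 website s3://www.mydomain.com/ --index-document index.html
```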
- Origin Domain: choose the www bucket from the dropdown. AWS will now notify you that you should "use website endpoint". Do that. Ultimately it should look something like this:
www.mydomain.com.s3-website-us-west-1.amazonaws.com
- Protocol: HTTP. This is the protocol it will be accessing S3 over, and there's no need to use SSL here.
- Name: whatever you want.
- Viewer protocol policy: Redirect HTTP to HTTPS
- Alternate domain name (CNAME): www.mydomain.com
- Custom SSL certificate: ^ The one for this domain
Create! Wait a bit.
- Go to (or create) your zone in Route 53
- Create Record
- Record name: www
- Record type: A
- Alias: YES
- Route traffic to: Cloudfront
- Choose distribution: The distribution you just created for www.mydomain.com
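If you manage DNS from the CLI, the record corresponds to a change batch roughly like this (Z2FDTNDATAQYW2 is the fixed hosted-zone ID Route 53 uses for all Cloudfront alias targets; the distribution domain name is a placeholder for your own):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.mydomain.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1234abcd.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

Applied with aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch file://record.json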
Visit www.mydomain.com in a browser, see if it works.
Also make sure that HTTP redirects to HTTPS:
curl -v http://www.mydomain.com
should show HTTP/1.1 301 Moved Permanently with Location: https://www.mydomain.com/
Pro-tip: Clear your DNS cache. In Mac OS (at least in May 2023): sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
- Create a new bucket. The bucket can have any name, but it's easiest if you use mydomain.com
- Keep it empty
- Enable Static Hosting:
- Hosting type: Redirect requests for an object
- Host name: www.mydomain.com
- Protocol: None
- You might need to turn off the "Block all public access" safeguards here. Can't remember
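For reference, the redirect the console sets up corresponds to this website configuration, which can also be applied from the CLI (a sketch; bucket and host names are placeholders):

```shell
# Configure the empty apex bucket to redirect every request to the www host.
# Omitting Protocol mirrors the console's "Protocol: None" choice.
aws s3api put-bucket-website \
  --bucket mydomain.com \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"www.mydomain.com"}}'
```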
- Create a new Cloudfront Distribution
- Origin Domain: choose the naked-domain bucket from the dropdown. AWS will now notify you that you should "use website endpoint". Do that. Ultimately it should look something like this:
mydomain.com.s3-website-us-west-1.amazonaws.com
- Name: Whatever you want. "mydomain.com" is fine
- Protocol: HTTP only
- Viewer protocol policy: Redirect HTTP to HTTPS
- Alternate domain name (CNAME): mydomain.com
- Custom SSL certificate: ^ the one for mydomain.com
Create! Wait a bit.
- Record Name: blank
- Record type: A
- Alias: YES
- Route traffic to: Cloudfront
- Choose distribution: The distribution you just created for mydomain.com
Clear DNS cache: sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
Visit mydomain.com in a browser, see if it works.
Check ALL protocols/domains and make sure they end up at HTTPS/WWW:
curl -v http://mydomain.com
curl -v http://www.mydomain.com
curl -v https://mydomain.com
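A quick way to eyeball all three in one go (this hits your live site, so it needs DNS and the distributions to be ready):

```shell
# Follow each redirect chain and print only the final URL;
# every line should come out as https://www.mydomain.com/
for url in http://mydomain.com http://www.mydomain.com https://mydomain.com; do
  printf '%-30s -> ' "$url"
  curl -sIL -o /dev/null -w '%{url_effective}\n' "$url"
done
```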
Breathe a sigh of relief.
.github/workflows/deploy-prod.yml
```yaml
name: Production Deploy
on:
  push:
    branches:
      - production
  workflow_dispatch:
jobs:
  deploy:
    runs-on: ubuntu-latest
    # "environment" belongs at the job level, not on an individual step
    environment: production
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          audience: ${{ vars.AWS_AUDIENCE }}
          role-to-assume: ${{ vars.AWS_ROLE }}
          aws-region: us-west-1
      - name: Checkout
        uses: actions/checkout@v3
      - name: Copy files to S3 Content bucket
        run: |
          aws s3 sync dist/ s3://${{ vars.AWS_S3_CONTENT_BUCKET }}
      - name: Clear Cloudfront cache
        run: |
          aws cloudfront create-invalidation --distribution-id ${{ vars.AWS_CLOUDFRONT_CONTENT_DISTRIBUTION_ID }} --paths "/*"
```
To get it to work:
- Set up an OIDC connector from Github to AWS. Easier than it sounds. Good instructions here
- Create an "environment" in Github called "production"
- Set the 4 vars needed. They aren't secrets; the OIDC connector handles keys and roles
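For context, the IAM role referenced by AWS_ROLE needs a trust policy that lets Github's OIDC provider assume it. It looks roughly like this (account ID and repo are placeholders for your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:your-org/your-repo:*"
        }
      }
    }
  ]
}
```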