This document assumes you already have Docker installed and that you're familiar with its use cases. In essence, Docker is used to package our applications as standardized executable components (containers) that combine application source code with all the operating system libraries and dependencies required to run the code in any environment (typically the cloud).
There are many benefits to this, and there are likewise many caveats. Refer to Google to learn more.
You should already know how to create a simple Node app with a single endpoint, but if you don't...
- Create a new directory named `node-test`, run `npm init -y`, followed by `npm i --save express`
- Create an `app.js` file, and in it add the following:

  ```js
  const app = require('express')();

  app.get('/', (req, res) => {
    res.send('Hola, Mundo!\n');
  });

  app.listen(3000, '0.0.0.0');
  ```

- We can ensure this works by running `node app.js` from the root directory, and making a GET request to `localhost:3000` via your browser of choice, Postman, Paw, or good ol' `curl`.
We will issue this command in subsequent sections via other tools as it will initialize our application in other environments. You can kill the app as we won't really need it running locally any longer.
- In the root of the project directory, create a file with no extension named `Dockerfile` and add the following:

  ```dockerfile
  FROM node:10
  WORKDIR /usr/src/app
  COPY package*.json ./
  RUN npm i
  COPY . .
  EXPOSE 3000
  CMD [ "node", "app.js" ]
  ```
- `FROM` specifies the runtime
- `WORKDIR` configures where any subsequent commands within the Dockerfile will run
- `COPY` instructs Docker to copy the `package.json` (and `package-lock.json`) files from the directory in the first argument to the directory in the second argument
  - To be clear, the first argument is relative to the current directory where Docker is being run. The second argument is relative to the `WORKDIR` specified prior
  - So in this case, copy the package files from the current dir (presumably somewhere under `/home/user/` on Ubuntu) into `/usr/src/app`
- `RUN` runs the specified command
- The second `COPY` copies over all files from the current directory, minus anything that might be inside `.dockerignore`
- `EXPOSE`: by default, Docker sandboxes what's run inside the container. This exposes port `3000`, which in this case is the port assigned to the Node app.
- `CMD` instructs Docker to run those strings as the container's startup command; note that this is an array
- Next, add a `.dockerignore` file so your local `node_modules` directory is never copied into the image. The `RUN` command in the `Dockerfile` already handles installation for us, as it runs `npm i` in the `WORKDIR`:

  ```
  node_modules
  npm-debug.log
  ```
The next few steps will create a Docker image for the `node-test` app.
- From the root dir of the project, run: `docker build -t node-test .`
  - Where built images are actually stored keeps changing, so if you're interested, just Google it.
  - The `-t` flag tags the image; the argument that follows it (`node-test`) is the name of the newly created image
  - The final argument (`.`) is the build context, meaning what is to be containerized: in this case, the current directory
- Next run `docker run -p 12345:3000 -d --name node-test node-test`
  - This should be straightforward, but the `-p` flag is used for port forwarding. This command tells Docker to forward the container's port 3000 to your local port 12345.
  - The `-d` flag runs the container in the background (detached), and `--name` gives the running container a name we can refer to later.

And with that, the container should be running locally.
To stop the container, run `docker ps` to see a list of running containers, then run `docker kill [container_name]`. Or, if you prefer a GUI, use the Docker Desktop app you likely already have installed if you've followed the `banana-stand` repo's README.

For now, let's kill the app with `docker kill node-test` as we don't really need it running for the next section.
In order to create an ECR repository from our local machine, we'll use a combination of the AWS and Docker CLIs to issue commands to ECR.
This can get quite confusing, but hopefully I've done a good job of simplifying this process. For the next few steps, you will need to keep the following credentials handy:
- Your AWS Region
- Your AWS Account ID
- First, let's make sure our local `aws-cli` is properly credentialed and authenticated. Run `aws configure` and enter your AWS credentials.
- Next, we need to authenticate Docker with ECR. Run the following:

  ```
  aws ecr get-login-password --region [region] | docker login --username AWS --password-stdin [aws_account_id].dkr.ecr.[region].amazonaws.com
  ```
If that is successful, you should see `Login Succeeded` output to the console.
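That long registry address follows a fixed pattern, so it can help to build it once and reuse it. Here's a small sketch with made-up placeholder values (substitute your own account ID and region):

```shell
# Made-up placeholder credentials -- substitute your own.
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-east-1

# Every private ECR registry endpoint follows this pattern:
ECR_REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"

echo "$ECR_REGISTRY"
# -> 123456789012.dkr.ecr.us-east-1.amazonaws.com
```

With the variable set, the login command above becomes `aws ecr get-login-password --region "$AWS_REGION" | docker login --username AWS --password-stdin "$ECR_REGISTRY"`, with no long hostname to retype.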
So, by now you should have a Docker image of the `node-test` application, and your `docker-cli` should be properly authenticated (via the `aws-cli`) with ECR.

The next step is to create a repo on ECR that contains the Docker image.
aws ecr create-repository \
--repository-name node-test \
--image-scanning-configuration scanOnPush=true \
--region [region]
If that was successful, the console should output some JSON related to the repository you just created:
"repository": {
"repositoryArn": "arn:aws:ecr:us-east-1:[aws_account_id]:repository/node-test",
"registryId": "[aws_account_id]",
"repositoryName": "node-test",
"repositoryUri": "[aws_account_id].dkr.ecr.us-east-1.amazonaws.com/node-test",
"createdAt": "2020-11-12T21:20:42-06:00",
"imageTagMutability": "MUTABLE",
"imageScanningConfiguration": {
"scanOnPush": true
},
"encryptionConfiguration": {
"encryptionType": "AES256"
}
}
In the event that command was unsuccessful, run through the steps in this section again and make sure you have properly logged in to ECR via Docker. If you're still having trouble and prefer to reference the actual AWS documentation, click here.
Author's Note:
There's a particular step at the beginning of the previously linked AWS documentation (Setting up with Amazon ECR) that instructs you to make an IAM user with Administrator Access in order to properly set up ECR. You can check that out here.
In the interest of simplicity and brevity, I forwent including those instructions when writing this doc. While I did indeed create the Administrator IAM user as outlined there, I found that during the process of creating a new ECR repo via the CLI, I was never asked for any credentials related to that Administrator account.
In the event that you are unable to create a repository via the steps outlined above, there is a slight chance that might be the problem. I doubt it though.
We're almost done here. All that's left is to tag the image, and push it to the newly created ECR repo:
- Run the following to tag the image:

  ```
  docker tag node-test:latest [aws_account_id].dkr.ecr.us-east-1.amazonaws.com/node-test:latest
  ```

- Exhale, then push the image to the ECR repo:

  ```
  docker push [aws_account_id].dkr.ecr.us-east-1.amazonaws.com/node-test:latest
  ```
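The tag you apply is the full image reference: the registry endpoint, the repository name, and a tag. A sketch with made-up placeholder values showing how those pieces compose (the commented commands at the end are the same tag-and-push pair from above):

```shell
# Made-up placeholder values -- substitute your own.
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-east-1
REPO_NAME=node-test
TAG=latest

# registry endpoint + repository name + tag = full image reference
IMAGE_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${REPO_NAME}:${TAG}"

echo "$IMAGE_URI"
# -> 123456789012.dkr.ecr.us-east-1.amazonaws.com/node-test:latest

# With the variable in place, the tag-and-push pair becomes:
#   docker tag node-test:latest "$IMAGE_URI"
#   docker push "$IMAGE_URI"
```

Composing the reference this way makes typos in the long hostname much less likely, since the account ID and region only have to be written once.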
Grab some water, as this might take a bit. This particular image was 900 MB. You should see some progress bars in the terminal to denote the various processes taking place.
Once that's complete the console should output something like:
45e026833d6c: Pushed
fc2333117db3: Pushed
c6593bc326e9: Pushed
ae34c2333f98: Pushed
2cf748de010a: Pushed
a5e6a8bf3a79: Pushed
61c9990b1160: Pushed
8211c12c1c23: Pushed
1d3ec06e3d4f: Pushed
9e5330403dba: Pushed
8bd20dc0b7e5: Pushed
94b70b410c2a: Pushed
3567db1eb737: Pushed
latest: digest: sha256:[hash_seq_] size: [size of your image]
That's it. The repository itself should now be visible from your ECR console. If you still see a Welcome Screen of sorts when accessing ECR from your AWS console in the browser, navigate to the left-hand menu and select Repositories under the Amazon ECR heading (at the time of writing, it's the last item). Also, if the repositories list turns up empty, be sure the correct region is selected from the region menu at the top of the console.
For information on how to pull and delete your ECR repos, check out the end of this doc. It's pretty straightforward.
Sources: