@ostens
Last active August 5, 2021 09:12
Deploying our application in containers with the help of nginx


Introduction

We've spent the last few months working on a javascript front-end and java back-end web application. When we want to deploy it locally, there are three different components we need to run:

  • The front-end: a React app using node's package manager (npm). It's deployed using npm run start (react-scripts start).

  • The back-end: a Spring Boot application using the Gradle build tool. It's deployed using bootRun.

  • The router: an nginx container acting as a reverse proxy, deployed using docker-compose up -d:

    • listens on port 80
    • routes /api/ requests to the backend on port 57040
    • routes all other requests to the frontend on port 3000

Deploying three different components separately can get annoying, particularly when you do it a lot and the series of commands is the same every time. As we were already having to run docker-compose up to start the router container and we had already containerized the front- and back-end for integration tests, it seemed an obvious next step to get compose to deploy all three components together.

Deploying the front- and back-end in containers would also give them more isolation; building an application and its dependencies in a container better replicates real deployment as it avoids interference from a local working environment.

So, with minimal docker and nginx experience, I set about the task of linking together these containers so that I could start our application with one command.

Reverse proxy

Before we go any further, a quick definition. A proxy server is an intermediary separating users from websites - requests and responses are all passed through the proxy on their way between the client and server.

As well as passing on requests, proxy servers can be used for modifying requests. This can be for an honest reason (such as load balancing) or for a malicious one (such as concealing the source of a request).

We can think about different proxies doing slightly different jobs:

  • A proxy server that simply passes unmodified requests and responses between two connections is usually called a tunnel.
  • A forward proxy is usually an internet-facing proxy which acts on behalf of a requester (e.g. a proxy configured in your browser so that Netflix can't tell which country you're in).

You => Proxy => Internet => Server

  • A reverse proxy is usually an internal-facing proxy which acts on behalf of a service, forwarding requests to one or more ordinary servers (e.g. Netflix's proxy to load-balance traffic to different servers)

You => Internet => Proxy => Server

In our set-up, nginx acts as proxy and sits next to the server, so it can be thought of as a reverse proxy.

We already had docker installed and two configuration files:

  • nginx-local-dev.conf: for defining the nginx routing
  • docker-compose.yml: for running the nginx container

We also had two dockerfiles:

  • client.Dockerfile for serving the client
  • server.Dockerfile for running the server

nginx config

nginx uses a configuration file to determine how incoming requests should be routed. Our simple configuration has only one server block, listening on port 80; if we had multiple server blocks listening on port 80, nginx would use server_name to determine which block should handle a given request. As we want to handle all requests to localhost:80, we set server_name to localhost.

We then check whether the request path begins with /api/; if it does, we forward it to http://host.docker.internal:57040/api/, i.e. port 57040 on the host. Otherwise, we forward it to http://host.docker.internal:3000/, i.e. port 3000 on the host.

server {
    listen       80;
    server_name  localhost;

    location /api/ {
        proxy_pass http://host.docker.internal:57040/api/;
    }

    location / {
        proxy_pass http://host.docker.internal:3000/;
    }
}
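A reverse proxy also hides the original client from the upstream server, so it's common to forward that information along in headers. If our backend ever needed the client's IP, the /api/ location could be extended like this (an illustrative sketch, not part of our actual config):

```nginx
location /api/ {
    proxy_pass http://host.docker.internal:57040/api/;
    # Pass the original Host header and client address through to the backend
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```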

We use compose, a tool for running multi-container applications. It's also handy because it means we only have to remember one very simple command (docker-compose up). The container configuration is defined in a docker-compose.yml file:

version: "2"

services:
  router:
    image: nginx:latest
    volumes:
      - ./nginx-local-dev.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 80:80

Here we can see that the nginx configuration file is added to the container via a volume, and there's no need to use a Dockerfile because the nginx image is public and we can pull it from dockerhub. The only other set-up required is exposing port 80 on the container and linking it to port 80 on our host machine.
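At this stage the router can already be started and smoke-tested on its own (these curl calls assume the front- and back-end are running locally on ports 3000 and 57040, and the /api/ path shown is just an example):

```shell
# Start the nginx container in the background
docker-compose up -d

# Requests to port 80 should now be proxied onwards
curl -i http://localhost/        # routed to the frontend on port 3000
curl -i http://localhost/api/    # routed to the backend on port 57040
```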

Client dockerfile

Even though it wasn't hooked up to nginx, we already had a client dockerfile which we'd been using to run front-end integration tests with puppeteer:

FROM node

# Copy the source and package manifest into the image
COPY src /src
COPY package.json /

# Install dependencies and produce a production build
RUN npm i
RUN npm run build
COPY serve.json /build

# Install the static file server used to host the build
RUN npm i serve -g

# Serve the build directory on port 3000 when the container starts
ENTRYPOINT ["serve", "build", "-l", "3000"]

This uses the basic node image from dockerhub, copies our source files onto it, installs our dependencies, builds the frontend, and then serves the build with serve listening on port 3000.

Important: An entrypoint runs its command when a container is started, rather than when the image is built (unlike RUN, which executes at build time).
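To make the distinction concrete, here's a toy dockerfile (hypothetical, not part of our project):

```dockerfile
FROM alpine

# RUN executes while the image is being built; its result is baked into the image
RUN echo "hello from build time" > /message.txt

# ENTRYPOINT executes every time a container is started from the image
ENTRYPOINT ["cat", "/message.txt"]
```

Building this image runs the echo once; every docker run of the resulting image runs the cat.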

Server dockerfile

We also had a server dockerfile which we'd used for back-end integration tests with rest-assured:

FROM openjdk:8

# Copy the pre-built jar and native libraries into the image
COPY build/libs /build
RUN mv /build/ipp-server-*.jar /insightapp.jar

# Move the sqlite native library somewhere java.library.path can find it
RUN mkdir /ipp-libs/
RUN mv /build/libsqlite4java-linux-amd64*.so /ipp-libs/libsqlite4java-linux-amd64.so
RUN rm -r /build/

# Run the jar with the test spring profile when the container starts
ENTRYPOINT [ "java", "-jar", "-Djava.library.path=/ipp-libs", "-Dspring.profiles.active=test", "/insightapp.jar" ]

This uses the basic openjdk:8 image from dockerhub, copies in the build folder, moves the jar and native sqlite library into place, and then runs the jar with the spring profile set to test.
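Note that this dockerfile copies build/libs in from the host rather than compiling inside the container, so the Gradle build has to run first. The image tag below is just an example, and your project may use a different Gradle task:

```shell
# Build the jar (and native libraries) on the host first
./gradlew build

# Then build the server image from the directory containing server.Dockerfile
docker build -f server.Dockerfile -t insightapp-server .
```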

Migration

We already had pretty much everything we would need to deploy the application as a whole.

The nginx configuration file remarkably didn't need any changes. It was already forwarding traffic from the host's port 80 to the host's ports 57040 and 3000 using host.docker.internal:port. It was just a case of adding new (client and server) services to docker-compose.yml and exposing ports 3000 and 57040 respectively.

Our first new service in our docker-compose.yml was the client:

  client:
    build:
      context: client/.
    ports:
      - "3000:3000"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000"]
      interval: 1s
      timeout: 1s
      retries: 10

This builds the client dockerfile, publishes container port 3000 on host port 3000, and runs a healthcheck.

The server service looks similar:

  server:
    build: server/.
    ports:
      - "57040:57040"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:57040/api/actuator/health"]
      interval: 2s
      timeout: 20s
      retries: 10

This builds the server dockerfile, publishes container port 57040 on host port 57040, and also runs a healthcheck (with a longer timeout because the server takes longer to start than the client).
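Once the stack is up, you can watch the healthchecks go from starting to healthy. The exact container name depends on your project directory, so the one below is just an example:

```shell
# List services along with their health status
docker-compose ps

# Or query a single container's health directly
docker inspect --format '{{.State.Health.Status}}' myproject_server_1
```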

We also extend the router service slightly:

  router:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx-local-dev.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - client
      - server
    healthcheck:
      test: ["CMD", "service", "nginx", "status"]
      interval: 0.1s
      timeout: 1s
      retries: 10

This specifies that the router should only start after the client and server services have been started. (Note that this short list form of depends_on only controls start order - it doesn't wait for the healthchecks to pass.)
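If we want the router to wait until the client and server healthchecks actually pass, rather than just for their containers to have started, Compose file format 2.1 adds a long form of depends_on. Our file declares version "2", so it would need bumping to "2.1" to use this:

```yaml
  router:
    image: nginx:latest
    depends_on:
      client:
        condition: service_healthy
      server:
        condition: service_healthy
```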

Success - I can now run the whole application with a docker-compose up -d in the root. Is it worth it? Yes, I think so - it was really simple to set this up using docker compose and nginx and it got me thinking about services and how they connect up. And nowadays I don't even open VSCode or IntelliJ to run my app - I just type one command into bash.
