Task 2

Dockerizing a Spring Boot Application

Overview

In this article we'll focus on how to dockerize a Spring Boot application to run it in a container. Furthermore, I'll show how to create a composition of containers which depend on each other and are linked to each other in a virtual private network. We'll also see how they can be managed together with single commands.

Let’s start by creating a Java-enabled, lightweight base image, running Alpine Linux.

Common Base Image

We're going to use the following Dockerfile:

FROM alpine:edge
RUN apk add --no-cache openjdk8
COPY files/UnlimitedJCEPolicyJDK8/* /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/
  • FROM: The keyword FROM tells Docker to use a given image with its tag as the build base. If this image is not in the local library, an online search on Docker Hub, or on any other configured remote registry, is performed.
  • RUN: With the RUN command, we're executing a shell command line within the target system. Here we're utilizing Alpine Linux's package manager apk to install the Java 8 OpenJDK.
  • COPY: The last command tells Docker to COPY a few files from the local file system, specifically a subfolder of the build directory, into the image at a given path.

REQUIREMENTS: In order to run the tutorial successfully, you have to download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files from the Oracle website. Simply extract the downloaded archive into a local folder named 'files'.
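
For example, the download and extraction could look like this (a sketch; the archive name jce_policy-8.zip is what Oracle typically ships, and it unpacks to the UnlimitedJCEPolicyJDK8 folder the Dockerfile's COPY expects):

mkdir -p files
unzip jce_policy-8.zip -d files/
ls files/UnlimitedJCEPolicyJDK8/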

To finally build the image and store it in the local library, we have to run:

docker build --tag=alpine-java:base --rm=true .

NOTICE: The --tag option will give the image its name and --rm=true will remove intermediate images after it has been built successfully. The last character in this shell command is a dot, acting as a build-directory argument.
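
We can quickly verify the result:

docker image ls alpine-java
docker run --rm alpine-java:base java -version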

Dockerize a Standalone Spring Boot Application

As an example of an application which we can dockerize, we will take the spring-cloud-config/server from the spring cloud configuration tutorial. As a preparation step, we have to assemble a runnable jar file and copy it to our Docker build directory:

cd spring-cloud-config/server
mvn package spring-boot:repackage
cp target/server-0.0.1-SNAPSHOT.jar ../../spring-boot-docker/files/spring-cloud-config-server.jar
cd ../../spring-boot-docker

Now we will create a Dockerfile named Dockerfile.server with the following content:

FROM alpine-java:base
COPY files/spring-cloud-config-server.jar /opt/spring-cloud/lib/
COPY files/spring-cloud-config-server-entrypoint.sh /opt/spring-cloud/bin/
ENV SPRING_APPLICATION_JSON='{"spring": {"cloud": {"config": {"server": {"git": {"uri": "/var/lib/spring-cloud/config-repo", "clone-on-start": true}}}}}}'
ENTRYPOINT ["/usr/bin/java"]
CMD ["-jar", "/opt/spring-cloud/lib/spring-cloud-config-server.jar"]
VOLUME /var/lib/spring-cloud/config-repo
EXPOSE 8888
  • FROM: As the base for our image we take the Java-enabled Alpine Linux image created in the previous section.
  • COPY: We let Docker copy our jar file into the image.
  • ENV: This command lets us define environment variables, which will be respected by the application running in the container. Here we define a customized Spring Boot application configuration to hand over to the jar executable later.
  • ENTRYPOINT/CMD: This will be the executable to start when the container is booting. We must define them as a JSON array, because we will use an ENTRYPOINT in combination with a CMD for the application arguments.
  • VOLUME: Because a container's file system is ephemeral, we define a mount-point placeholder for our configuration repository, so its data can live in a volume outside the container.
  • EXPOSE: Here we are telling Docker on which port our application is listening. This port can then be published to the host when the container is started.

To create an image from our Dockerfile, we have to run:

docker build --file=Dockerfile.server --tag=config-server:latest --rm=true .

But before we run a container from our image, we have to create a volume to mount:

docker volume create --name=spring-cloud-config-repo

NOTICE: A container's file system is discarded when the container is removed (unless it is committed to an image), but data stored in a volume persists across several containers.
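
We can inspect the volume's details, including where Docker stores its data on the host:

docker volume inspect spring-cloud-config-repo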

Finally we are able to run the container from our image:

docker run --name config-server -p 8888:8888 -v spring-cloud-config-repo:/var/lib/spring-cloud/config-repo config-server:latest

First, we --name our container; if we don't, a name will be chosen automatically. Then, we must --publish our exposed port (see the Dockerfile) to a port on our host. The value is given in the form host-port:container-port. If only a container port is given, a randomly chosen host port will be used. If we leave this option out, the container will be completely isolated. The --volume option gives access to either a directory on the host (when used with an absolute path) or a previously created Docker volume (when used with a volume name). The path after the colon specifies the mount point within the container. As the argument we have to tell Docker which image to use; here we give the image name from the previous docker build step. Some more options which could be useful:

  • -it: enable interactive mode and allocate a pseudo-tty
  • -d: detach from the container after booting
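
With the container up, we can run a quick smoke test; the /{application}/{profile} endpoint is standard for Spring Cloud Config Server, and what it returns depends on the contents of the mounted configuration repository:

curl http://localhost:8888/application/default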

If we ran the container in detached mode, we can inspect its details, stop it and remove it with the following commands:

docker inspect config-server
docker stop config-server
docker rm config-server

Dockerize Dependent Applications in a Composite

Docker commands and Dockerfiles are particularly suitable for creating individual containers. But if you want to operate on a network of isolated applications, the container management quickly becomes cluttered.

To solve that, Docker provides a tool named Docker Compose. It comes with its own build file in YAML format and is better suited to managing multiple containers. For example, it is able to start or stop a composite of services with one command, and it merges the logging output of multiple services into one pseudo-tty.

Let's build an example of two applications running in different Docker containers. They will communicate with each other and be presented as a “single unit” to the host system. We will build and copy the spring-cloud-config/client example described in the spring cloud configuration tutorial to our files folder, like we did before with the config-server.

This will be our docker-compose.yml:

version: '2'
services:
    config-server:
        container_name: config-server
        build:
            context: .
            dockerfile: Dockerfile.server
        image: config-server:latest
        expose:
            - 8888
        networks:
            - spring-cloud-network
        volumes:
            - spring-cloud-config-repo:/var/lib/spring-cloud/config-repo
        logging:
            driver: json-file
    config-client:
        container_name: config-client
        build:
            context: .
            dockerfile: Dockerfile.client
        image: config-client:latest
        entrypoint: /opt/spring-cloud/bin/config-client-entrypoint.sh
        environment:
            SPRING_APPLICATION_JSON: '{"spring": {"cloud": {"config": {"uri": "http://config-server:8888"}}}}'
        expose:
            - 8080
        ports:
            - 8080:8080
        networks:
            - spring-cloud-network
        links:
            - config-server:config-server
        depends_on:
            - config-server
        logging:
            driver: json-file
networks:
    spring-cloud-network:
        driver: bridge
volumes:
    spring-cloud-config-repo:
        external: true
  • version: Specifies which format version should be used. This is a mandatory field. Here we use the newer version '2', whereas the legacy format is '1'.
  • services: Each object in this key defines a service, i.e. a container. This section is mandatory.
    • build: If given, docker-compose is able to build an image from a Dockerfile.
      • context: If given, it specifies the build directory, where the Dockerfile is looked up.
      • dockerfile: If given, it sets an alternate name for the Dockerfile.
    • image: Tells Docker which name it should give to the image when build features are used. Otherwise Docker searches for this image in the library or a remote registry.
    • networks: This is the identifier of the named networks to use. A given name value must be listed in the networks section.
    • volumes: This identifies the named volumes to use and the mount points to mount the volumes to, separated by a colon. As in the networks section, a volume name must be defined in a separate volumes section.
    • links: This will create an internal network link between this service and the listed service. This service will be able to connect to the listed service; the part before the colon specifies a service name from the services section, and the part after the colon specifies the hostname under which the service is reachable on an exposed port.
    • depends_on: This tells Docker to start a service only if the listed services have started successfully. NOTICE: this works only at the container level! For a workaround that waits until the dependent application is ready, see config-client-entrypoint.sh.
    • logging: Here we are using the json-file driver, which is the default. Alternatively, syslog with a given address option, or none, can be used.
  • networks: In this section we're specifying the networks available to our services. In this example we let docker-compose create a named network of type bridge for us. If the option external is set to true, it will use an existing network with the given name.
  • volumes: This is very similar to the networks section.

Before we continue, we will check our build-file for syntax-errors:

docker-compose config

This will be our Dockerfile.client to build the config-client image from. It differs from Dockerfile.server in that we additionally install OpenBSD netcat (needed in the next step) and make the entrypoint script executable:

FROM alpine-java:base
RUN apk --no-cache add netcat-openbsd
COPY files/config-client.jar /opt/spring-cloud/lib/
COPY files/config-client-entrypoint.sh /opt/spring-cloud/bin/
RUN chmod 755 /opt/spring-cloud/bin/config-client-entrypoint.sh

And this will be the customized entrypoint for our config-client service. Here we use netcat in a loop to check whether our config-server is ready. Note that we can reach the config-server by its link name instead of by an IP address:

#!/bin/sh

while ! nc -z config-server 8888 ; do
    echo "Waiting for upcoming Config Server"
    sleep 2
done

java -jar /opt/spring-cloud/lib/config-client.jar

Finally we can build our images, create the defined containers, and start everything with one command:

docker-compose up --build
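
Once both services are up, the config-client is reachable on the published port; what it returns depends on the client application:

curl http://localhost:8080/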

To stop the containers, remove them from Docker, and remove the connected networks and volumes, we can use the opposite command:

docker-compose down

A nice feature of docker-compose is the ability to scale services. For example, we can tell Docker to run one container for the config-server and three containers for the config-client.

But for this to work properly, we have to remove container_name from our docker-compose.yml, to let Docker choose names, and we have to change the exposed-port configuration to avoid clashes, as shown below.
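
For the config-client this could look like the following fragment (a sketch: container_name is gone and only the container port is given, so Docker assigns a random free host port to each instance):

    config-client:
        build:
            context: .
            dockerfile: Dockerfile.client
        image: config-client:latest
        ports:
            - "8080"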

After that, we are able to scale our services like so:

docker-compose build
docker-compose up -d
docker-compose scale config-server=1 config-client=3

Conclusion

As we've seen, we are now able to build custom Docker images, run a Spring Boot application as a Docker container, and create dependent containers with docker-compose.

For further reading about the build-files, we refer to the official Dockerfile reference and the Docker Compose reference.

Multiple Docker containers with the same image, but different config

To generate the configuration, you can use one of the following approaches:

  1. Volume mounts: Use volumes and mount the configuration file during container start with docker run -v, as we did above (and similarly with docker-compose.yml). We can repeat this as often as we like, so we can mount several configs into the container (the runtime version of the image). The downside is portability: we have to create those configs on the host before running the container and ship those files along with the container. (See the sketch after this list.)

  2. Entry-point based configuration (generation): Most of the advanced Docker images provide a complex, so-called entry point which consumes ENV variables passed when starting the container and uses them to create the configuration(s). For example, we can run docker run -e MYSQL_DATABASE=myapp percona, and this will start Percona and create the database myapp. This is all done by:

    • adding the entry-point script to the image,
    • not forgetting to copy the script during the image build,
    • letting the script evaluate the ENV variables during container startup.

    The entry-point strategy is very common and very powerful, and I would suggest going this route whenever possible.

  3. Derived images: For completeness, there is also the image-derivation strategy: when we have a base image called "myapp", for installation X we create a new image:

    FROM myapp
    COPY my.ini /etc/mysql/my.ini
    COPY application.yml /var/app/config/application.yml

    And call this image myapp:x. The obvious issue with this is that we end up having a lot of images; on the other hand, compared to option 1, it's much more portable.
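
A minimal sketch of options 1 and 2 (the paths and the myapp image name are placeholders):

# option 1: mount a host-side config file into the container at start
docker run -d --name myapp-qa -v $(pwd)/qa/application.yml:/var/app/config/application.yml myapp

# option 2: let the image's entry point generate the config from ENV variables
docker run -d -e MYSQL_DATABASE=myapp percona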

Run multiple Docker environments (qa, stage, prod) from the same Docker Compose file

For example, I'd like my-docker-test-site.com to map to the production container, qa.my-docker-test-site.com to the qa container of my site, etc. I'd rather not access my-docker-test-site.com:7893 or some other port for qa, stage, etc.

To accomplish this we are going to use jwilder/nginx-proxy. We'll be using the pre-built image directly.

To spin this up on our local system let's issue the following command:

docker run -d -p 80:80 --name nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

This project is great: as we add or remove containers, they will automatically be registered with or removed from the proxy, and we'll be able to access their web servers through a VIRTUAL_HOST. (More on how, specifically, below.)

Networking

Before we get too far into the container environment of our app, we need to consider how the containers will be talking to each other.

We can do this using the docker network commands. So we're going to create a new network and then allow the nginx-proxy to communicate via this network.

First we'll create a new network and give it a name of service-tier:

docker network create service-tier

Next we'll configure our nginx-proxy container to have access to this network:

docker network connect service-tier nginx-proxy
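
We can verify that the proxy is now attached to the network:

docker network inspect service-tier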

Now when we spin up new containers we need to be sure they are also connected to this network or the proxy will not be able to identify them as they come online. This is done in a docker-compose file as seen below.

Put the two together

Now that we've defined our application with the server.js and Dockerfile, and we have an nginx-proxy ready to proxy to our environment-specific Docker HTTP servers, we're going to use docker-compose to build our container, glue the parts together, and pass environment variables through to create multiple deployment environments.
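
The application itself isn't shown in this gist; a minimal sketch of what ./app/Dockerfile could look like, assuming server.js is a simple Node.js HTTP server listening on port 80:

FROM node:alpine
WORKDIR /usr/src/app
# server.js is assumed to serve HTTP on port 80
COPY server.js .
EXPOSE 80
CMD ["node", "server.js"]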

Save this file as docker-compose.yml:

version: '2'

services:
  web:
    build: ./app/
    environment:
      - NODE_ENV=${NODE_ENV}
      - PORT=${PORT}
      - VIRTUAL_HOST=${VIRTUAL_HOST}
      - VIRTUAL_PORT=${PORT}
    ports:
      - "127.0.0.1:${PORT}:80"

networks:
  default:
    external:
      name: service-tier

Some notes on this file:

  • build: ./app/ points to the directory where our Dockerfile lives.
  • The list of environment variables is important. VIRTUAL_HOST and VIRTUAL_PORT are used by the nginx-proxy to know which host/domain name to proxy requests for, and on which port. (We'll show an example later.) You can see an earlier exploratory post I wrote explaining more about environment vars.
  • The ports entry is also important. We don't want to access the container via my-docker-test-site.com:8001 or whatever port the container is actually running on, because we want to use the VIRTUAL_HOST feature of nginx-proxy to let us say qa.my-docker-test-site.com. This configuration makes the container listen only on the loopback interface, so the nginx-proxy can proxy to these containers but they aren't accessible from the inter-webs.
  • Lastly, under networks: we point the web app's default network at the service-tier network we set up earlier. This allows the nginx-proxy and our running instances of the web container to talk to each other.

So with all of these pieces in place, all we need to do now is run some docker-compose commands to spin up our necessary environments.

Below is an example script that can be used to spin up qa, and prod environments.

BASE_SITE=my-docker-test-site.com

# qa
export NODE_ENV=qa
export PORT=8001
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d

# prod
export NODE_ENV=production
export PORT=8003
export VIRTUAL_HOST=$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d

This script sets some environment variables that are then used by the docker-compose command, and it also sets a unique project name with -p ${VIRTUAL_HOST}.
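
Without DNS records for these hostnames, we can still verify the routing by sending requests with an explicit Host header (assuming the proxy listens on localhost):

curl -H "Host: qa.my-docker-test-site.com" http://localhost/
curl -H "Host: my-docker-test-site.com" http://localhost/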

Sensitive data

For this, Docker provides Docker Secrets. It's a good way to store everything needed: login/password pairs, SSH keys, etc. Secrets are made available inside containers when they are specified as arguments to the docker command. By default, a container can access secrets in the /run/secrets/ directory, but this is customizable.
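
A minimal sketch (secrets require swarm mode; db_password and the service name myapp are placeholders):

# secrets are a swarm-mode feature
docker swarm init
printf 'S3cr3t!' | docker secret create db_password -
# the secret shows up as a file under /run/secrets/ inside the service's containers
docker service create --name myapp --secret db_password alpine:edge sh -c 'cat /run/secrets/db_password; sleep 3600'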

Another option could be HashiCorp Vault.

Kubernetes has its own secrets mechanism, which is very similar to Docker Secrets.
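
For example, a Kubernetes secret can be created from a literal value and then mounted into pods as files or exposed as environment variables:

kubectl create secret generic db-password --from-literal=password='S3cr3t!'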

Thank you.
