
@voronenko-p
Created September 20, 2017 14:10

Using ansible-container to build your next application base image

Most Dockerfiles start from a parent image. If you need to completely control the contents of your image, at some moment you realize that you need your own base image. An application base image is a special Docker image configured for correct use within Docker containers, aware of your application's requirements and expectations, and shipping a set of additional tools.

Usually these include, but are not limited to:

  • Modifications for Docker-friendliness.
  • Application-specific libraries and frameworks
  • Administration tools that are especially useful for troubleshooting inside Docker
  • Mechanisms for easily running multiple processes without violating the Docker philosophy

Probably every company or agency dealing with dockerized applications has implemented such an image. Until now you would typically find it in the form of complex shell scripts and makefiles, especially if the base image is released for multiple OSes.

The closer the ansible-container 1.0.0 release gets, the more likely it is that you will give Ansible a try to take care of your next dockerized application base image.

Let's briefly dive into components you might use in such image.

Image building boilerplate with ansible-container

Folders and files organization

You can launch the build process using any tool you like. From my experience, I've settled on the following approach: https://github.com/Voronenko/container-image-boilerplate

requirements.txt - defines any specific Python tools and libraries you want to use together with ansible-container. .projmodules - lists the dependencies (roles, deployables, etc.) needed to build the base image. The format is similar to .gitmodules, but without direct links to commits. Example:

https://gist.github.com/26eeed453872c9a5060c2b0874f7d3de
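For illustration only, an entry in such a file could look like a .gitmodules section without commit pinning (the exact key layout in your .projmodules may differ; the role name below is the bootstrap role referenced later in this article):

```
[submodule "roles/sa-container-bootstrap"]
    path = roles/sa-container-bootstrap
    url = https://github.com/softasap/sa-container-bootstrap
```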

init.sh && init_quick.sh - resolve external dependencies into the specified locations (roles under roles/, deployables under deployables/, etc.)

ansible.cfg - empty by default, but you can adjust parameters for your build process according to the documentation.

version.txt - version information file for gitflow-based releasing (x.y.z)

image.txt - additional information about the constructed image, used by the tagging and pushing steps.

Makefile && docker-helper.sh - orchestration utilities, discussed in the next chapter.

docker-compose.yml - usually I also provide a docker-compose configuration, so the image can be tried immediately.

container.yml - definition of the image you are going to build

And, of course, .gitignore - we definitely do not want to commit external resources: p-env is the virtual environment created during the build, and ansible-deployment is transient output produced by ansible-container itself.

https://gist.github.com/7c7d97a5b8eb69369ec98ad3277bfe37

Build process orchestration

The Makefile implements the following phases:

clean

Resets the project to its initial state by removing all build directories and artifacts.

https://gist.github.com/8ab4f2eb953ef1f43923c20cb6f2fa73

Initialize

Parses .projmodules and checks out the necessary roles and deployables.

https://gist.github.com/1fe284c4536f5d69e1fcd747dad30af1

p-env/bin/ansible-container

Internal task - creates and initializes a virtual environment with ansible-container under the p-env/ directory.

https://gist.github.com/1948962964f899f2570ac6222190c18d

build

Executes ansible-container against container.yml, providing the path to roles. The project name influences how the produced image will be named.

https://gist.github.com/756741ad75dc6661cdae46b9a74e4228

run and stop

Potentially, ansible-container allows you to immediately run and stop images the way docker-compose does. From my experience, that does not always work:

  • container.yml is based on version 2 of the docker-compose spec, while I usually need at least 3.1 in production. The steps are provided anyway - they might work in your case. I usually end up with a separate docker-compose.yml (v3.1+) in the directory.

https://gist.github.com/277bfd457ccb10fb16855b4549c82a16

tag and push

Usually, as a result of a successful build, you want to push the artifact to some registry. This highly depends on your project, and you will most likely adjust this step for your needs.

Example provided: properly tags the built api and nginx images and pushes them to Docker Hub. https://gist.github.com/5e0b69d131c93c1da79286fcda7997fa

compose helpers

Launching, stopping, and tearing down docker-compose managed containers.

https://gist.github.com/89b123dcf1232758092f2d030ca3cf36

That's all. Back to the primary target.

Building a base image with ansible-container

For base image, I usually want:

  • an agreed folder organization (I do not want to guess each time where and how to put the logic to run),
  • a robust init system,
  • optional support for running multiple processes per container (as long as the container remains a single logical unit, this does not contradict the Docker philosophy),
  • the ability to use different base images depending on the project, without diving into compilation specifics each time,
  • the possibility to run logic under custom users inside the container and, if necessary, synchronize files and folders with the host system.

And a final point: as much as possible, I want to keep my code better organized than a series of bash commands, which are usually hard to read and hard to adjust the way we normally do with programs. This is where ansible-container helps.

Why a valid init for containers is important

Although people say that a proper Docker microservice should run a single dedicated process, that is not always achievable in the real world. It is more correct to say that Docker suggests the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes. Sometimes you really need to run more than one service in a Docker container. This is especially true if you are adapting an application that previously ran in a standalone VPS environment.

Why the init process is important: running processes can be visualized as a tree: each process can spawn child processes, and each process has a parent except for the top-most one. This top-most process is the init process. It is started when you start your container and always has PID 1. The init process is responsible for starting the rest of the system, including your application. When a process terminates, it turns into what is referred to as a "defunct process", also known as a "zombie process" (https://en.wikipedia.org/wiki/Zombie_process). In simple words, these are processes that have terminated but have not (yet) been waited for by their parent processes.

But what if the parent process terminates (intentionally or unintentionally)? What happens to its spawned processes then? They no longer have a parent process, so they become "orphaned" (https://en.wikipedia.org/wiki/Orphan_process).

And this is where the init process kicks in. It becomes the new parent of (adopts) orphaned child processes, even though it never created them directly. The operating system kernel handles adoption automatically. Moreover, the operating system expects the init process to reap its adopted children too.

What if it does not? As long as a zombie is not removed from the system via a wait, it consumes a slot in the kernel process table, and if this table fills up, it becomes impossible to create further processes on the host system itself. Also, a wrongly implemented init system often leads to incorrect handling of processes and signals, and can result in problems such as containers that can't be gracefully stopped, or leaked containers that should have been destroyed.
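The mechanics are easy to demonstrate in a few lines of Python - a toy sketch, not a real init: it forks a child, lets it linger as a zombie, then reaps it with waitpid, which is exactly what a PID 1 process must do for every adopted child:

```python
import os
import time

# Fork a child that exits immediately.
pid = os.fork()
if pid == 0:
    os._exit(0)  # child terminates; the parent has not wait()ed yet

time.sleep(0.2)  # the child now sits in the process table as a zombie

# This is the step a correct PID 1 must perform for every adopted child:
# waitpid() collects the exit status and releases the zombie's slot
# in the kernel process table.
reaped, status = os.waitpid(-1, 0)  # -1 means "any child"
print("reaped child", reaped, "exit status", os.WEXITSTATUS(status))
```

Between the fork and the waitpid call, `ps` on the host would show the child in state `Z` (defunct); a real init repeats this reap loop on every SIGCHLD.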

More reading on the topic:

Upstart, systemd, and SysV init are usually too heavy (overkill) to be used inside Docker (and not always easily possible). So what are the options?

Candidates for container init process

At the time of writing this article, the most often used init approaches were:

Custom written init script

As per the Docker documentation: https://docs.docker.com/engine/admin/multi-service_container/ Take a look at a proof-of-concept example of such a script below: https://gist.github.com/5c9fc93147640d3aecd25d5dd1c495b1

It will work, but it does not really guarantee reaping... Let's examine more robust alternatives.

Dumb-init

dumb-init is a simple process supervisor and init system designed to run as PID 1 inside minimal container environments (such as Docker). It is deployed as a small, statically-linked binary written in C.

dumb-init enables you to simply prefix your command with dumb-init. It acts as PID 1 and immediately spawns your command as a child process, taking care to properly handle and forward signals as they are received.

Project repo: https://github.com/Yelp/dumb-init
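In a Dockerfile, that usually amounts to a couple of lines. A sketch (the release version and `my-app` command are illustrative - pin the release you have verified):

```dockerfile
FROM ubuntu:16.04
# Version and URL are illustrative - check the dumb-init releases page.
ADD https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64 /usr/local/bin/dumb-init
RUN chmod +x /usr/local/bin/dumb-init
# dumb-init becomes PID 1, reaps zombies, and forwards signals to the command.
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD ["my-app"]
```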

Tini

Tini advertises itself as a tiny but valid init for containers. It promises:

  • protection from software that accidentally creates zombie processes
  • ensures the default signal handlers work for the software you run in your Docker image.
  • easy to inject: Docker images that work without Tini will work with Tini without any changes.

Shipped as a precompiled binary for a huge variety of platforms.

Project repo: https://github.com/krallin/tini
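A minimal sketch of injecting Tini on an Alpine base, where it is available as a package (on other distributions you can ADD the static binary from the releases page instead; `my-app` is a placeholder):

```dockerfile
FROM alpine:3.6
# Alpine ships tini in its package repository, installed to /sbin/tini.
RUN apk add --no-cache tini
# Tini runs as PID 1 and spawns the command as its child.
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["my-app"]
```

Note that since Docker 1.13 you can also get Tini without changing the image at all, via `docker run --init`.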

Runit

Runit is a cross-platform Unix init scheme with service supervision - a replacement for SysV init and other init schemes. It runs on GNU/Linux and BSD, and can easily be adapted to other Unix operating systems. The runit program is intended to run as Unix process no. 1; it is started automatically by runit-init, the /sbin/init replacement, if that is started by the kernel.

Project website: http://smarden.org/runit/

S6

S6 is a project from http://skarnet.org/software/s6/overview.html, actually a sibling of Runit. S6 contains a collection of utilities revolving around process supervision and management, logging, and system initialization. Moreover, specifically for Docker there is a helper project, the so-called "s6-overlay": https://github.com/just-containers/s6-overlay

S6 provides:

  • A lightweight init process with support for initialization (cont-init.d) and finalization (cont-finish.d) scripts, as well as for fixing ownership and permissions (fix-attrs.d).
  • The s6-overlay provides proper PID 1 functionality inside a Docker container. Zombie processes are properly cleaned up.
  • Support for multiple processes in a single container ("services").
  • Usable with all base images - Ubuntu, CentOS, Fedora.
  • Distributed as a single .tar.gz file, to keep your image's number of layers small.
  • A whole set of handy, composable utilities included in s6 and s6-portable-utils.
  • Log rotation out of the box through logutil-service, which uses s6-log under the hood.

My_Init

Part of the Phusion baseimage project https://github.com/phusion/baseimage-docker, which currently targets Ubuntu 16.04 and 14.04. It consists of a custom-written Python script with an optionally wrapped runit.

It provides:

  • protection from software that accidentally creates zombie processes - reaping is implemented as part of the control script.
  • additional support for startup files in the init.d and rc.local directories.
  • additional optional services inside the container via runit: cron, ssh.
  • additional environment handling magic.

Requires Python inside your container. In its present form it is limited to Ubuntu base systems only.

Supervisord ?

This is a well-known process manager, usually used with Python applications. I have often seen people trying to use it as an init system. But supervisor explicitly mentions that it is not meant to be run as your init process. If a subprocess forks itself off, it won't be cleaned up by Supervisor. Thus you would need to choose a different init system anyway.

It is a good fit if you already used it with your application earlier.

More ?

If you know more candidates, please comment.

Candidates for running multiple services inside container.

From the ones mentioned above that are at the same time lightweight, worth mentioning are:

SupervisorD

A classic supervisor which does not even require root privileges. Its ctl script acts similarly to systemd's systemctl. It supports process restarting, as well as event handlers based on shell protocols.

RUnit

Runit ships with a swiss-army-knife set of utilities; one of them is runsvdir (http://smarden.org/runit/runsvdir.8.html). It allows defining a set of "service definitions" in a directory, and takes care of launching them at startup.

Typical interaction examples:

/usr/bin/sv status /etc/service/ - gets the status of the services listed in the configuration folder

/usr/bin/sv -w 10 down /etc/service/* - shuts down all services with a 10-second timeout
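A service definition is just a directory containing an executable run script. A hypothetical one for nginx (paths assumed) might look like:

```shell
#!/bin/sh
# /etc/service/nginx/run - runsvdir picks this up and supervises it.
# exec replaces the shell so runit supervises nginx itself, and
# "daemon off;" keeps nginx in the foreground as runit requires.
exec /usr/sbin/nginx -c /etc/nginx/nginx.conf -g "daemon off;"
```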

S6 (in scope of S6-overlay project)

Unlike supervisor, s6 uses a folder structure to control services, similar to Runit. The s6-overlay expects them under /etc/services.d before running init (it then copies them over to /var/run/s6/services).

https://gist.github.com/279b1b1934f0f7d12c5dd6ef12448c9e

Each of those run files is an executable that s6 executes to start the process.

https://gist.github.com/b79d8ff51b5cadd68cabcaf6773166ba

Services are controlled by the s6-svc binary. It has a number of options, but the main idea is that you give it the directory of the service. For example, if I wanted to send SIGHUP to nginx, I would do s6-svc -h /var/run/s6/services/nginx. Note that it is /var/run and not /etc/services.d; this is a big difference from Runit. And lastly, the -h stands for SIGHUP.

s6-overlay comes with a number of prebuilt versions, so you can download the one that matches your Linux setup. If you want to use s6 directly, users of Alpine and a few other flavors of Linux can just install it from their package manager. We're running Debian and there is no package for it, so we would have to compile s6 on our own.

More ?

If you know more candidates, please comment.

Running processes as a different user

By default, Docker containers run as the root user. This is bad because:

  • The application might modify things that it shouldn't.
  • If the application shares a folder with the host, all created files will be owned by root.
  • If the container is compromised - well, it is still worse if the attacker is root.

If you want to understand how uid and gid work in Docker containers, take a look at this article: https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf

What are the options?

Docker's native

Set at the Dockerfile level

https://gist.github.com/c562f9abd0b1b62d17de7fa0e6ac40b4

A success scenario, but not always possible if the container communicates with the host.
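For reference, a sketch of the Dockerfile-level approach (user name, uid, and `my-app` are illustrative):

```dockerfile
FROM ubuntu:16.04
# Create an unprivileged account; the uid is explicit so host-shared
# volumes can be chown-ed to the same uid on the host side if needed.
RUN groupadd -g 1000 app && useradd -m -u 1000 -g app app
# Everything after USER (including the CMD) runs as this account.
USER app
CMD ["my-app"]
```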

chpst

This is a utility shipped with Runit (http://smarden.org/runit/chpst.8.html) - you can easily use it if you already have Runit as part of your init system.
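Usage is a simple command prefix; for example, inside a hypothetical runit run script (user, group, and path are illustrative):

```shell
#!/bin/sh
# chpst -u user:group drops privileges before exec-ing the command,
# so the supervised process never runs as root.
exec chpst -u app:app /usr/local/bin/my-app
```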

Phusion's setuser

If you have chosen the Phusion image as a basis, you have setuser in the path:
https://github.com/phusion/baseimage-docker/blob/master/image/bin/setuser

sudo

Well, you can install sudo inside the container. Some people do. But do you really want to?

Your own baseimage built with ansible-container

With the information above you already have a few ideas for constructing your base image.

Sorry for repeating, but let me emphasize once more: if you want an alternative to custom makefiles and tons of hard-to-read shell files, give ansible-container a try. It provides a better alternative to the command && command && command (and so on) syntax you've been struggling with to build containers. Since Ansible is at the heart of Ansible Container, you can make container builds completely predictable, repeatable, and moreover readable.

I have to admit that, although it has already passed 0.9.1, ansible-container is still in its early days, with some cumbersome side effects from time to time (https://medium.com/@V_Voronenko/evaluating-ansible-container-as-a-tool-for-custom-docker-containers-build-500a0395a4c8), but it really becomes more robust with each minor release - so if you try it now, your production will be ready for the concept when it is released.

Previously, running Ansible inside a container required installing a lot of unnecessary packages, which caused bigger image sizes. Ansible-container introduced a different approach: a combination of a conductor (a managing container with Ansible and the necessary tools installed) and the target container. This keeps the target image size small, and the approach works on any base image onto which Ansible and Python can be installed.

The aim of the current proof of concept: build a bootstrap role for building a base application image, and thus simplify the application play itself.

We want to select a preferred init system: tini, dumb-init, or the init approach by Phusion (phusion-init).

https://gist.github.com/febcdee505cf0251c356fef46b44382f

We want the possibility to choose the internal service layer: runit, supervisor, or s6. https://gist.github.com/82c3676bcaabfeb6d2fca3353a89a34d

Following Phusion's base image concept, we want optional cron, sshd, and syslog services inside the container.

https://gist.github.com/36a4b6ccf7030bbb3435eaf98a538a0b

If we ever want ssh inside the container, let's provide a keypair to trust.

https://gist.github.com/b74b322a0a6fb08311d906dd408a0ee2

The resulting play is compact and readable, like your usual Ansible play. https://gist.github.com/dbd2b150758bd34be9f72db51cd87657

Moreover, we are not limited to a single approach - we are free to select the init system and service system from the supported list. You can always add a new one.

In order to provide some compatibility between systems, I usually:

a) put service runners into /etc/service/<name>/run - this supports running via shell, runit, and s6; supervisord might re-execute the same script.
b) put image pre-initialization into /etc/myinit.d/*.sh
c) always name the entry point docker-entrypoint.sh
d) let the role, thanks to Ansible, detect and target multiple base systems - thus you can choose, for example, the often-used Alpine or Jessie.

Take a look at a role example that might be used to build such a base image: https://github.com/softasap/sa-container-bootstrap

A typical ansible-container play used to build an application image on top of your base play might look like:

tini init system based, with supervisor as service manager

https://gist.github.com/db5db62855d95608c18c6e234b367ea2

tini init system based, with supervisor as service manager

https://gist.github.com/0277c54e9e4fc2ae9e7ce6187fcc64da

Close to Phusion's base image

https://gist.github.com/0fa03202a61625b400f5a0075cb40e46

More examples on https://github.com/Voronenko/devops-docker-baseimage-demo

A few figures on the produced image sizes

https://gist.github.com/00c943c3b2ef45edfa2e27a928a1aeb6

Summary

Ansible-container, once it reaches a stable version (hopefully this year), appears to be a very promising tool for implementing complex Docker-based build pipelines. In addition, the Ansible community lets you re-use a number of available roles, which can potentially streamline your deployment implementation path.
