Bespoke devops with simple UNIX tools

Assumptions :

  • You have a web service, which may be a monolith or a microservice mesh, but either way it has more than one moving part.
  • You desire to containerise the moving parts.
  • You use systemd.
  • You know your way around UNIX utilities and shell script.

What we will achieve :

  • A build server that creates packages for your distribution. These packages may be your application, or container images.
  • A repository where you can point your package manager in order to install and update your packages.
  • A configuration server hosting a hierarchical key-value database, acting as a central registry for configuration.
  • Hosting the containers with machinectl.
  • A reverse proxy to route them all.
  • Automatic building, installation, and configuration.

Pros of this approach :

  • You get to tailor an architecture exactly to your needs.
  • Unopinionated and flexible.
  • No vendor lock-in.
  • Very few dependencies.
  • Using standard utilities that already solve software distribution. No reinventing the wheel !
  • Significant network savings, as images are rarely updated, only applications are.
  • You invest in deepening your knowledge of Linux and your distribution's architecture instead of random third-party tools that come and go like Paris fashion week.

Cons of this approach :

  • Less cohesive architecture ; you use a patchwork of scripts.
  • Requires more of an upfront investment, both in reading and in implementation.
  • If you have rather extreme microservices needs, e.g. thousands of containers, more dedicated tools will probably serve you better.
  • Conversely if your monolith is entirely self-contained in a single image and has only one moving part, this approach is overkill.

Hypervisor

We will use machined to manage our containers. This is our only dependency. You can find it as systemd-container in Debian. You should consult the machinectl manpage, but here's a quick summary.

Your containers are placed in /var/lib/machines. The directory name becomes your container name.

Container configuration is placed in /etc/systemd/nspawn/CONTAINERNAME.nspawn. If no configuration exists, the defaults will be used, which may not be what you want.

machinectl provides commands for starting, stopping, rebooting, enabling and disabling your containers. By enabling a container, it will be started automatically on server boot.
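
For example, day-to-day management might look like this (assuming a hypothetical container directory named coolpkg-server under /var/lib/machines) :

# start and stop
machinectl start coolpkg-server
machinectl poweroff coolpkg-server

# start automatically on server boot
machinectl enable coolpkg-server

# list running containers
machinectl list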

Useful nspawn settings

All systemd service files are written in ini format. You should definitely consult the documentation, but here are the settings I primarily fiddle around with :

Ephemeral=on means changes to the container's file system will be discarded when it's stopped. Keep in mind that LinkJournal will need to be off for ephemeral containers.

Bind=source:dest allows you to bind directories from the host to the container. This is extremely useful if you want your containers to have persistent state, for example file or database storage : persistent data lives on the host, so it survives even an ephemeral container. Another example is binding a directory of UNIX domain sockets.

BindReadOnly is similar, but only allows reads. You could use this to bind configuration. You can even bind the application itself, if you want to store it on the host but run it in a container.

Private enables or disables private networking. Disabling it means the container shares the host's network, which is useful if you want the host to reach the container's ports directly, for example to reverse proxy them, without binding ports from host to guest.

PrivateUsers likewise enables or disables namespacing of users and groups. I often disable private users, because containerisation takes care of segregation between services, so the traditional UNIX user model is redundant. Within the container, the handful of services all run as root.

Hostname sets your container's hostname. This is useful because you can namespace configuration by hostname. That way, if you have two parallel copies of an app, called app1 and app2, they can fetch different settings because they have different hostnames.
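
As a sketch of what that buys you, a service inside the container could derive its configuration path from its own hostname (the dconf path here is illustrative ; the configuration server is described below) :

# inside the container
curl -s "$CONFIGURATION_URL/org/myorg/$(hostname)/db/password"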

Sample

[Exec]
Ephemeral=on
LinkJournal=no
PrivateUsers=no

[Network]
VirtualEthernet=no

Fetching images

Every distribution is different, but here is how to do it in Debian. apt can fetch packages over HTTP. You write new repositories in /etc/apt/sources.list.d/REPONAME.list

Here is a sample :

deb [arch=amd64 trusted=yes] http://TOKEN@HOST:PORT/apt stable main

We will serve packages with an HTTP Basic Authentication challenge. That way, third parties can't use your repos. We set the architecture we'll be targeting (for 64-bit x86 servers, you want amd64), and we consider the repository trusted, so we don't have to sign our packages.

Afterwards, you can :

apt update
apt install my-container-image

And apt will take care of everything for you.

Debian provides unattended-upgrades if you want to fetch fresh packages automatically ; it only upgrades packages that are already installed.
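
Here is a minimal sketch of turning that on ; the snippet below is the stock periodic configuration, and you may additionally need to whitelist your repository's origin in /etc/apt/apt.conf.d/50unattended-upgrades :

apt install unattended-upgrades

# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";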

Configuration server

Instead of reinventing the wheel, you can use dconf, which is a hierarchical key-value database used in the GNOME project. You can think of it as similar to the Windows registry.

You can interact with the dconf database through the dconf binary, provided by the package dconf-cli. You can examine the manpage, but here's the gist of it :

# reading a key
dconf read /org/myorg/app/db/password

# reading an entire hierarchy (ini format). trailing / is significant
dconf dump /org/myorg/app/

# writing a key
dconf write /org/myorg/app/db/password "'cool_password'"

If the configuration server is going to be public-facing, you should also use some kind of authentication, because otherwise your configuration (which likely includes passwords) will be exposed.

You can very trivially translate URLs to the dconf hierarchy. A sample implementation can be found in remote-dconf.
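
If you would rather roll your own, the mapping is mechanical. Here is a hypothetical CGI handler in the same spirit (a sketch, not the actual remote-dconf implementation) :

#!/bin/sh
# map the request path onto the dconf hierarchy:
# a trailing / dumps a hierarchy, anything else reads a single key
printf 'Content-Type: text/plain\r\n\r\n'
case "$PATH_INFO" in
	*/) dconf dump "$PATH_INFO" ;;
	*) dconf read "$PATH_INFO" ;;
esac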

Build server

The build server will house an HTTP server with basic authentication and some shell scripts that automate the building of application and container packages.

In order not to bake URLs for the build and configuration server into your scripts, you can manage variables centrally in /etc/environment, from which all processes inherit their environment variables.
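
For example, /etc/environment might contain (placeholder values) :

BUILD_URL=http://TOKEN@build.internal:8000
CONFIGURATION_URL=http://TOKEN@config.internal:8000

These are the BUILD_URL and CONFIGURATION_URL variables the scripts below rely on.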

Every distribution has its own way of packaging, but here is the gist for Debian and deb packages.

Creating a deb file

You create a directory in this format :

package-name_version_arch

You will probably want amd64 as the architecture. If you are constantly rolling out new releases, you can use a timestamp for the version number.

Files you place in your root package directory will be placed in the corresponding locations in the root of your system upon installation. That way, coolpkg_1.0.0_amd64/etc/systemd/system/coolpkg.service becomes /etc/systemd/system/coolpkg.service. apt takes care of copying or deleting the files.
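
A minimal package tree might therefore look like this :

coolpkg_1.0.0_amd64/
    DEBIAN/control
    etc/systemd/system/coolpkg.service
    opt/coolpkg/...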

The special DEBIAN directory contains metadata about your package. The control file defines information about the package :

Package: coolpkg
Version: 1.0.0
Architecture: amd64
Maintainer: John Doe <test@example.com>
Description: A very cool package
Depends: php-cli, bash, dconf-cli

Note the Depends field. apt will fetch and install all dependencies before installing your package. For example, let's say you have a socket-server-image and an http-server-image, both of which extend a base-image by overlaying on top of it ; declaring base-image as a dependency guarantees the base is installed first, as sketched below.
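
For instance, the two images' control files might declare (a sketch, reusing the names above) :

Package: socket-server-image
Depends: base-image

Package: http-server-image
Depends: base-image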

You can read more about control files in the relevant manpage.

You can also place executable preinst and postinst files inside DEBIAN. These will be executed before or after a package is installed, respectively. For example, you should stop the container's execution before upgrading it, and then restart it afterwards.

When you're done, you can create your package with :

dpkg-deb --build --root-owner-group YOUR_PACKAGE_ROOT_DIR

Creating a repository

Let's say you want the root of your repository to be in /srv/apt; an HTTP server will run with /srv as root.

Your deb files should be placed in /srv/apt/pool/main. You will also need an index file, which provides a listing of all packages available. Given the amd64 architecture, you need to create the directory /srv/apt/dists/stable/main/binary-amd64. Then you can scan your packages and pipe the output to the index file like so :

cd /srv/apt
dpkg-scanpackages -m --arch amd64 pool > dists/stable/main/binary-amd64/Packages

Building an application

Ideally, you should place a build script in the git repository of your project. Here is a sample build script for a Laravel application :

#!/bin/sh

# clean previous runs
rm -rf public/js

# build the front-end
npm run production

# update composer autoload
composer dump-autoload -a

# create the package
basename='coolpkg'
timestamp=$(date '+%s')
dirname="$basename"_"$timestamp"_amd64
mkdir -p $dirname/opt/$basename $dirname/etc/systemd/system

# copy files over
cp -pr --parents .env composer.json composer.lock artisan app bootstrap config database public routes scripts storage vendor resources/views $dirname/opt/$basename

# building the package directory
mkdir -p $dirname/DEBIAN
chmod 755 $dirname/DEBIAN

cat << EOF > $dirname/DEBIAN/control
Package: $basename
Version: $timestamp
Architecture: amd64
Maintainer: John Doe <test@example.com>
Description: Cool Package
Depends: php-fpm, nginx, php-apcu, php-curl
EOF

# preinst script
cat << EOF > $dirname/DEBIAN/preinst
#!/bin/sh
systemctl stop nginx php7.4-fpm || true
EOF
chmod +x $dirname/DEBIAN/preinst

# postinst script
cat << EOF > $dirname/DEBIAN/postinst
#!/bin/sh
chown -R www-data:www-data /opt/$basename
cd /opt/$basename
php artisan migrate -n --force
php artisan cache:clear
php artisan route:clear
php artisan view:clear
php artisan config:cache
systemctl restart nginx php7.4-fpm || true
EOF
chmod +x $dirname/DEBIAN/postinst

# make the deb file
dpkg-deb --build --root-owner-group $dirname

# cleanup
rm -rf $dirname

Wrangling your scripts

With each project having its own build script, you can keep a local copy of each git repository in a directory, for example wip. You can then create a directory, recipes, containing scripts that will :

  • download the relevant repo into wip if necessary
  • pull the latest updates
  • run the build script
  • rescan the package directory to regenerate the index file

Here is a sample wrapper script :

#!/bin/sh

if ! test -d wip
then mkdir wip
fi

if ! test -d builds
then mkdir builds
fi

if test -z "$1"
then
	echo 'no recipe provided'
	exit 1
elif ! test -f recipes/"$1".sh
then
	echo 'no recipe found for' $1
	exit 1
else
	for i
	do /bin/sh recipes/"$i".sh
	done
	cd /srv/apt
	dpkg-scanpackages -m --arch amd64 pool > dists/stable/main/binary-amd64/Packages
	cd -
fi

And here is an example recipe for a Laravel project :

. lib/lib.sh

export COMPOSER_ALLOW_SUPERUSER=1
me='coolpkg'
url="https://bitbucket.org/my-username/$me.git"

if ! test -d wip/$me
then
	old=$PWD
	cd wip
	git clone --depth=1 $url $me
	cd $me
	composer install --no-dev
	npm install
	cd $old
fi

# set env file
cat << EOF > wip/$me/.env
APP_ENV=production
CONFIGURATION_URL="$CONFIGURATION_URL"
APP_KEY=
EOF

cd wip/$me
git pull
composer update --no-dev
npm update
. build.sh
mv *deb /srv/apt/pool/main

Building a container

You will need a tool that bootstraps your distribution of choice ; for Debian, that's debootstrap. If you will be bootstrapping several distributions, e.g. openSUSE, you can use mkosi, a systemd family project that wraps all of these tools under a unified Python script.

We will install a script, entrypoint, which, on boot, will look for our application in our repository. A systemd service will wrangle it, and your application's preinst and postinst scripts will automatically enable and run your service. Here is an example for a debian image :

#!/bin/sh

name='coolpkg-server'
timestamp=$(date '+%s')
pkg=wip/$name-image_"$timestamp"_amd64
machine=$pkg/var/lib/machines/$name

# create the directory
mkdir -p $machine

# bootstrap base system
debootstrap --variant=minbase --arch amd64 sid $machine

# set a password
chroot $machine passwd root

# install some packages (refresh the package lists first)
chroot $machine apt-get update
chroot $machine apt-get install -y php-cli systemd

# add our repo
echo "deb [arch=amd64 trusted=yes] $BUILD_URL/apt stable main" > $machine/etc/apt/sources.list.d/buildserver.list

# add entrypoint script
echo 'apt-get update && apt-get install -y myapp' > $machine/usr/local/bin/entrypoint
chmod +x $machine/usr/local/bin/entrypoint

# service on boot
cat <<EOF > $machine/etc/systemd/system/entrypoint.service
[Unit]
Description=entrypoint
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=true
User=root
Group=root
ExecStart=bash /usr/local/bin/entrypoint

[Install]
WantedBy=default.target
EOF

chroot $machine systemctl enable entrypoint

# set a hostname
echo $name > $machine/etc/hostname

# nspawn for the host!
mkdir -p $pkg/etc/systemd/nspawn
cat << EOF > $pkg/etc/systemd/nspawn/$name.nspawn
[Exec]
Ephemeral=on
LinkJournal=no
PrivateUsers=no

[Network]
VirtualEthernet=no

[Files]
BindReadOnly=/run/user/0/dconf-service/keyfile/user:/root/.config/dconf/user
EOF

# setup DEBIAN
mkdir -p $pkg/DEBIAN
chmod 755 $pkg/DEBIAN

# DEBIAN control file
cat << EOF > $pkg/DEBIAN/control
Package: $name-image
Version: $timestamp
Architecture: amd64
Maintainer: John Doe <test@example.com>
Description: $name Container Image
Depends: dconf-cli, dconf-service, systemd, systemd-container
EOF

# DEBIAN preinst
cat << EOF > $pkg/DEBIAN/preinst
#!/bin/sh
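# poke dconf so that dconf-service starts and creates the keyfile we bind into the container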
dconf list / > /dev/null || true
systemctl stop systemd-nspawn@$name || true
EOF
chmod +x $pkg/DEBIAN/preinst

# DEBIAN postinst
cat << EOF > $pkg/DEBIAN/postinst
#!/bin/sh
systemctl enable systemd-nspawn@$name || true
systemctl start systemd-nspawn@$name || true
EOF
chmod +x $pkg/DEBIAN/postinst

# make the deb file
dpkg-deb --build --root-owner-group $pkg

# move to builds
mv wip/$name-image*deb /srv/apt/pool/main

# cleanup
rm -vrf $pkg

Serving your repository

All you need is an HTTP server with basic authentication. You can find a simple Python script sample here, and of course you can configure nginx or apache however you like.
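
For instance, here is a minimal nginx sketch, assuming you have generated a credentials file at /etc/nginx/htpasswd :

server {
	listen 80;
	root /srv;
	location / {
		auth_basic "repository";
		auth_basic_user_file /etc/nginx/htpasswd;
		autoindex on;
	}
}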

Configuring your application

To wrap things up, you can ask the configuration server for keys and receive their values. You can save the configuration server's URL in an environment variable, for example in /etc/environment during the creation of your container image.
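
In the container build script above, that can be a single extra line ; this sketch inherits CONFIGURATION_URL from the build server's own environment :

echo "CONFIGURATION_URL=$CONFIGURATION_URL" >> $machine/etc/environment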

For example, here's how you could get configuration for the database password in a Laravel environment :

'password' => env('APP_ENV') === 'production'
    ? Dconf::get('/org/myorg/myapp/db/password')
    : env('DB_PASSWORD', ''),

Given the following implementation for Dconf (the dump branch is sketched here : dconf dump emits ini, which PHP can parse natively) :

<?php
namespace App;

class Dconf {
    // read a single key, or dump an entire hierarchy if the path ends in /
    static function get(string $x) {
        if ($x[strlen($x) - 1] === '/') return Dconf::dump($x);
        else return Dconf::read($x);
    }

    // fetch a single key from the configuration server
    private static function read(string $x) {
        $x = env('CONFIGURATION_URL') . $x;
        $x = shell_exec("curl -s {$x}");
        // dconf prints GVariant literals; swap quotes so json_decode accepts them
        $x = str_replace("'", '"', $x);
        return json_decode($x);
    }

    // fetch an entire hierarchy as ini and parse it into an array
    private static function dump(string $x) {
        $x = env('CONFIGURATION_URL') . $x;
        $x = shell_exec("curl -s {$x}");
        return parse_ini_string($x, true);
    }
}

Remember to run php artisan config:cache ; otherwise every request will shell out to curl for its configuration and kill your performance.

In a Lumen environment, there is no cached config. What you can do instead is periodically refresh your .env file through an additional script :

#!/bin/sh

ENVFILE=.env

fetch() {
    what="$1"
    key="$2"
    value="$(curl -s "$CONFIGURATION_URL""$what" | sed s/\'//g)"
    echo "$key=$value" >> $ENVFILE
}

doit() {
    rm -f $ENVFILE
    fetch /org/myorg/app/cache_driver CACHE_DRIVER
    fetch /org/myorg/app/db/connection DB_CONNECTION
    fetch /org/myorg/app/db/host DB_HOST
    # and so on
}

while true
do
    doit
    sleep 1h
done

Which you can run as a service :

[Unit]
Description=MyApp reconfiguration service
After=network-online.target
Wants=network-online.target

[Service]
Restart=always
User=root
Group=root
WorkingDirectory=/opt/myapp
EnvironmentFile=/etc/environment
ExecStart=/usr/bin/reconfigure

[Install]
WantedBy=default.target