Running Unifi Controller in systemd-nspawn with cloud-init

This uses Ubuntu's server cloud image as a stateless container to run the UBNT Unifi Controller software. Configuration data is stored in a directory outside the container. Cloud-init is used to automatically set up the container image, so a new version can be dropped in with minimal effort. This should work with pretty much any modern Linux distro with systemd.

Setup

Systemd-nspawn prefers to store its machines on btrfs, so if your /var/lib/machines is not currently btrfs, you should create a btrfs filesystem and mount it there. Otherwise, systemd will automatically create an image file at /var/lib/machines.raw and mount that instead.
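
A quick way to check which filesystem currently backs that path (assuming the directory already exists):

# stat -f -c %T /var/lib/machines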

Create a device

In my case, root is ZFS, so I will create a new sparse zvol. An LVM volume or raw partition is also fine. Since the zvol below is sparse, the 100 GB size is only an upper bound; 5 or 10 GB of actual space should be plenty unless you plan on having other machines too.

# zfs create -s -b 4k -V 100g -o compression=off nas-pool/machines
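
If you would rather use LVM, a plain logical volume works just as well; the volume group name vg0 below is only a placeholder:

# lvcreate -L 10G -n machines vg0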

Create the btrfs filesystem on the device

# mkfs.btrfs /dev/zvol/nas-pool/machines

Mount it

Create a systemd unit to mount it: /etc/systemd/system/var-lib-machines.mount

[Unit]
Description=Virtual Machine and Container Storage

[Mount]
What=/dev/zvol/nas-pool/machines
Where=/var/lib/machines
Type=btrfs
Options=compress-force=lzo,discard,relatime

[Install]
RequiredBy=machines.target

The discard mount option ensures that deleted data frees up space in the underlying sparse zvol, even though it is not on an SSD.
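
If you prefer not to mount with discard, running fstrim by hand (or from a timer) should achieve the same result; this is an alternative, not part of the setup above:

# fstrim -v /var/lib/machines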

Don't forget to enable the mount unit:

# systemctl enable --now var-lib-machines.mount
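
To confirm the filesystem is actually mounted:

# findmnt /var/lib/machines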

Download the cloud image

Pull the latest Ubuntu 18.04 LTS cloud image.

# machinectl pull-tar https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64-root.tar.xz unifi

This will download to a "hidden" subvolume on btrfs, then clone it to the actual 'unifi' subvolume. This lets repeat pulls avoid re-downloading the file, and thanks to copy-on-write it won't take much extra space.

# btrfs subvolume list -ot /var/lib/machines 
ID	gen	top level	path	
--	---	---------	----	
257	885	5		.tar-https:\x2f\x2fcloud-images\x2eubuntu\x2ecom\x2fxenial\x2fcurrent\x2fxenial-server-cloudimg-amd64-root\x2etar\x2exz.\x227d2ece8-57e61b68a55c0\x22
305	1191	5		unifi

Create the config files

This will use the "NoCloud" datasource, which can load its config from a path bind-mounted into the image. There is also an "nspawn" file to define some settings for the container. You need a directory on the host to store these files, but it doesn't have to be the same as mine:

# tree -ug /nas-pool/cloud-init/unifi/
/nas-pool/cloud-init/unifi/
├── [root     root    ]  ds-identify.cfg
├── [root     root    ]  nocloud
│   ├── [root     root    ]  meta-data
│   └── [root     root    ]  user-data
└── [root     root    ]  unifi.nspawn
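
Assuming you keep the same path as above, the layout can be created with:

# mkdir -p /nas-pool/cloud-init/unifi/nocloud
# touch /nas-pool/cloud-init/unifi/ds-identify.cfg /nas-pool/cloud-init/unifi/unifi.nspawn
# touch /nas-pool/cloud-init/unifi/nocloud/meta-data /nas-pool/cloud-init/unifi/nocloud/user-data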

The ds-identify.cfg file forces cloud-init to use the NoCloud datasource, rather than slowly trying many other options that won't work:

datasource: NoCloud
policy: search,found=first,maybe=none,notfound=disabled

The meta-data file defines the container hostname and ID:

instance-id: iid-unifi
local-hostname: unifi

The user-data file defines all the first-boot setup that cloud-init will do to the container. It installs the Unifi controller and sets up a user to allow ssh access. Login shouldn't actually be necessary, so you can leave out the users: section if you don't care about that.

The ssh_keys: section defines the ssh host keys in the container so that they remain constant across rebuilds. You will need to generate these with ssh-keygen -t <type> -C "root@unifi" -f temp.key. This can also be skipped if you don't plan to log in.
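
For example, one key of each type used below could be generated like this (the output file names are just placeholders; paste the private and public key contents into the matching ssh_keys: entries):

$ for t in rsa dsa ecdsa ed25519; do ssh-keygen -t $t -N "" -C "root@unifi" -f unifi_host_$t; done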

Most of the bootcmd: section is also not strictly necessary, but it speeds up booting by disabling services that aren't useful in the container. The first line is the important one: it masks the mongodb service so it won't run when installed - the Unifi controller will run its own instance. The rest of the lines just disable some services that are not useful here. Without them, those services would fail and/or slow down boot, but they wouldn't break anything.

#cloud-config
bootcmd:
 - cloud-init-per instance cmd01 ln -sf /dev/null /etc/systemd/system/mongodb.service
 - cloud-init-per instance cmd02 systemctl disable --now snapd.seeded.service
 - cloud-init-per instance cmd03 systemctl mask --now snapd.seeded.service
 - cloud-init-per instance cmd04 systemctl disable --now snapd.service
 - cloud-init-per instance cmd05 systemctl mask --now snapd.service
 - cloud-init-per instance cmd06 systemctl disable --now snapd.socket
 - cloud-init-per instance cmd07 systemctl mask --now snapd.socket
 - cloud-init-per instance cmd08 systemctl disable --now iscsi.service
 - cloud-init-per instance cmd09 systemctl disable --now iscsid.service
 - cloud-init-per instance cmd10 systemctl disable --now lvm2-monitor.service
 - cloud-init-per instance cmd11 systemctl disable --now lvm2-lvmetad.socket
 - cloud-init-per instance cmd12 systemctl disable --now lvm2-lvmpolld.socket

packages:
 - openjdk-8-jre-headless
 - unifi

apt:
  sources:
    100-ubnt-unifi.list:
      source: "deb http://www.ui.com/downloads/unifi/debian stable ubiquiti"
      keyid: 06E85760C0A52C50
    mongodb-org-3.4.list:
      source: "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse"
      keyid: 0C49F3730359A14518585931BC711F9BA15703C6

users:
 - name: <you>
   groups: systemd-journal
   shell: /bin/bash
   lock_passwd: true
   sudo: ALL=(ALL) NOPASSWD:ALL
   ssh_authorized_keys:
    - <your ssh public key>

ssh_keys:
  rsa_private: |
    -----BEGIN RSA PRIVATE KEY-----
    blah
    -----END RSA PRIVATE KEY-----

  rsa_public: ssh-rsa blah root@unifi

  dsa_private: |
    -----BEGIN DSA PRIVATE KEY-----
    blah
    -----END DSA PRIVATE KEY-----

  dsa_public: ssh-dss blah root@unifi

  ecdsa_private: |
    -----BEGIN EC PRIVATE KEY-----
    blah
    -----END EC PRIVATE KEY-----

  ecdsa_public: ecdsa-sha2-nistp256 blah root@unifi

  ed25519_private: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    blah
    -----END OPENSSH PRIVATE KEY-----

  ed25519_public: ssh-ed25519 blah root@unifi

Finally, the unifi.nspawn defines some extra settings for systemd-nspawn. These override or add to the settings in the default systemd unit file systemd-nspawn@.service. The name of this file must match the name of the machine.

The Bind= line defines where the Unifi controller will keep its database and other settings. UBNT recommends 10-20 GB for this directory. You can change the part of this line before the colon to point wherever you want on the host.

This also assumes you have a bridge interface already set up on your host, lanbr0 in my case, with access to your LAN. Refer to your distro's documentation for how to set one up.

[Exec]
# Writable bind mounts don't work with user namespacing
PrivateUsers=no

[Files]
BindReadOnly=/nas-pool/cloud-init/unifi/ds-identify.cfg:/etc/cloud/ds-identify.cfg
BindReadOnly=/nas-pool/cloud-init/unifi/nocloud:/var/lib/cloud/seed/nocloud
Bind=/nas-pool/unifi-data:/var/lib/unifi

[Network]
Bridge=lanbr0

Once created, copy this file to /etc/systemd/nspawn/unifi.nspawn. It is worth keeping a copy elsewhere since the copy in /etc will be deleted automatically if/when you delete the container image.
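
As noted above, Bridge=lanbr0 assumes the bridge already exists on the host. Purely as an illustration (not part of the original setup), with systemd-networkd a minimal bridge might be defined like this, where enp3s0 is a placeholder for your physical NIC:

/etc/systemd/network/lanbr0.netdev:

[NetDev]
Name=lanbr0
Kind=bridge

/etc/systemd/network/lanbr0.network:

[Match]
Name=lanbr0

[Network]
DHCP=yes

/etc/systemd/network/enp3s0.network:

[Match]
Name=enp3s0

[Network]
Bridge=lanbr0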

Test the container

The container is now ready to boot. The first boot will take a minute or ten, depending on your hardware / internet speed.

Start the machine:

# machinectl start unifi

Watch the boot progress. Replace '-f' with '-b' to see the whole log. Look for any cloud-init errors. If there are any problems, you can view its log at /var/lib/machines/unifi/var/log/cloud-init.log.

# journalctl -M unifi -f

Also check the unit logs on the host; these may show errors with your "nspawn" file:

$ journalctl -u systemd-nspawn@unifi.service

When boot is done, you should be able to access the Unifi controller at https://unifi:8443, if your LAN's DNS is working properly. If not, you can find the IP address of the container with machinectl status unifi.
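
If DNS isn't resolving the name yet, you can at least confirm the controller is listening by hitting the container's IP directly; the controller uses a self-signed certificate, hence -k:

$ curl -k https://<container-ip>:8443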

Restart the container

Shut down and restart the container to verify it works. If everything worked correctly, this second boot should be much faster since setup is already complete.

# machinectl poweroff unifi
$ machinectl    <-- run this until it shows the container has actually stopped
# machinectl start unifi

Start the container at boot

# machinectl enable unifi
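
machinectl enable is essentially a shortcut for enabling systemd-nspawn@unifi.service. Depending on your distro and systemd version, you may also need to enable machines.target so containers are started at boot; enabling it explicitly doesn't hurt:

# systemctl enable machines.target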

Maintenance

Ubuntu appears to release updated builds of this container image fairly often. Rather than periodically logging into the container and using apt to upgrade packages, you can just rebuild the container from a new image. Since setup is automatic and data is stored outside the container, this is very fast and easy to do.

Make sure you have a backup before doing this! Your data should be preserved, but this will include a Unifi controller upgrade if they have released a new version.
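
For example, with the Bind= path used above, a tarball of the data directory is enough (ideally taken while the container is stopped so the database is quiet):

# tar -czf /root/unifi-data-backup.tar.gz -C /nas-pool unifi-data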

Delete existing container

First shut it down:

# machinectl poweroff unifi

Wait until machinectl shows that shutdown is complete, then delete the container:

# machinectl remove unifi

Pull the new image

(Optional) Clean up any "hidden" images from previous pulls:

# machinectl clean

Now download the new image. You can pull the same URL as before if you just want to force a Unifi controller upgrade.

# machinectl pull-tar <url> unifi

Finally, copy your "nspawn" file back into place:

# cp /nas-pool/cloud-init/unifi/unifi.nspawn /etc/systemd/nspawn/

Boot the new image

Boot the container again, which will re-run the initial setup steps using cloud-init. This means the latest version of the Unifi controller will also be installed. After boot is complete, your controller should be up and running again with all its data intact.

# machinectl start unifi
