Ubuntu Server on a Futro S720 with LXD and Docker - setup notes

LXD on Futro S720, basic installation

The following notes assume a Futro S720 "thin client" with a dual-core AMD GX-217GA SoC, 2GiB DDR3 RAM and the standard "2GB" mSATA SSD. There is no way to install Ubuntu 22.04 Server on the 2GB SSD (the attempt to install "Ubuntu Server minimal" resulted in a crash), so I connected a 5V SATA power supply cable to pins 1 and 7 of the "2xUSB" header and installed an HDD. The 10-pin 2xUSB header, next to the mSATA slot, has the following pinout: 1/2: 5V, 7/8: Gnd, 9: (key), 10: 3.3V.

What we have is:

  • Ubuntu Server 22.04.3 on a 4GB thumb drive
  • Normal installation on HDD
  • Disk partitioning:
    • 2GB mSATA disk partitioned as:
      • /boot (auto size)
      • swap (rest)
    • 500GB 2.5" WD HDD partitioned as:
      • / /dev/sda1 20GB
      • /dev/sda2 rest (unformatted partition) for the ZFS pool default in lxd init
  • USB WLAN dongle for WAN Network wlx4ce676b72edd (Sony)
  • GB Ethernet enp1s0
  • Disable the cloud-init service with sudo touch /etc/cloud/cloud-init.disabled

Test a bit and then reboot.
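
For "testing a bit", a minimal sanity check of disks, network and swap might look like this (commands picked here as an illustration, not from the original notes):

lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
networkctl
swapon --show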

The installation was done using a 4GiB DDR3 SO-DIMM but later operation was tested with the original 2GiB RAM module: playing with LXD works (e.g., running a few LXC containers alongside a single Ubuntu VM). I assume that the installation also works with the original 2GiB module. Any real workload, e.g., running Canonical MAAS with its Postgres DB, requires at least 4GiB RAM.

LXD configuration

Bridging

lxd init offers to create a bridge device; by default that's lxdbr0:

root@s720-1:/etc/netplan# networkctl 
IDX LINK            TYPE     OPERATIONAL SETUP      
  1 lo              loopback carrier     unmanaged
  2 enp1s0          ether    no-carrier  configuring
  3 wlx4ce676b72edd wlan     routable    configured 
  4 lxdbr0          bridge   routable    unmanaged
  6 veth904f4461    ether    carrier     unmanaged

Understanding bridging

First of all, it's important to know what a bridge is.

There are potentially outdated (and misleading) explanations of LXD bridging out there (e.g., netplan generate is included in netplan apply, and netplan try might be more appropriate). LXD networking: lxdbr0 explained helps to understand what's really intended.

When settings need to be changed, that's most likely done with lxc network set <network> <key> <value>. The keys are explained in the LXD network-bridge configuration docs.
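
For example, a hypothetical change using one of the documented keys (toggling NAT on the default bridge) would look like this:

lxc network set lxdbr0 ipv4.nat true
lxc network show lxdbr0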

Bridging notes

I tried to re-run the bridge configuration in lxd init. Since lxdbr0 was already in use I created a new bridge lxdbr1:

root@s720-1:/etc/default# lxc network show lxdbr1
config:
  ipv4.address: 10.55.62.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:3473:56a5:c29c::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr1
type: bridge
used_by:
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

Removing the bridge required "un-using" it first:

root@s720-1:/etc/default# lxc network detach-profile lxdbr1 default
root@s720-1:/etc/default# lxc network delete lxdbr1
Network lxdbr1 deleted

Bridging WIFI limitations (?)

Bridging over WiFi might be limited, as suggested by the tutorial on using the Ubuntu Fan, a kernel-based network mapping between internal (LXD) and external (hosts in a network) IP addresses that effectively provides a poor man's SDN: Easy overlay networking in LXD with the Ubuntu Fan.

Reading up on Netplan

The netplan tutorial uses a LXD provisioned VM with a bridge device on the host level.

The netplan apply docs mention that netplan only works in the "yaml-to-backend" direction regarding virtual devices (e.g., bridges or bonds). It can create bridges but they need to be removed manually using "network backend" commands (e.g., ip link delete dev bond0), or the system/VM needs to be restarted.
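
As a sketch with hypothetical device names: a bridge defined in a netplan YAML like

network:
  version: 2
  ethernets:
    enp1s0: {}
  bridges:
    br0:
      interfaces: [enp1s0]

is created by netplan apply, but after removing the stanza and re-applying, br0 stays around until it's deleted with a backend command:

ip link delete dev br0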


TG9541 commented Nov 23, 2023

Install Docker in LXC container

Following tutorial Running Docker inside of a LXD container:

Create the Docker host dhost and provide a BTRFS volume for /var/lib/docker through a dedicated storage pool:

thomas@s720-1:~$ lxc launch ubuntu:22.04 dhost
thomas@s720-1:~$ lxc storage create docker btrfs
thomas@s720-1:~$ lxc storage volume create docker dhost
Storage volume dhost created
thomas@s720-1:~$ lxc config device add dhost docker disk pool=docker source=dhost path=/var/lib/docker
Device docker added to dhost
thomas@s720-1:~$ lxc config set dhost security.nesting=true security.syscalls.intercept.mknod=true security.syscalls.intercept.setxattr=true
thomas@s720-1:~$ lxc restart dhost

The tutorial shows installation by a simple paste from Docker.com's Install Docker on Ubuntu. For me, installing required copying line by line. It's certainly better to do that with a provisioning script:

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# To install the latest version, run:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Verify that the Docker Engine installation is successful:
sudo docker run hello-world

Run Redis in dhost:

Inside dhost, in a 2nd shell (tmux), run root@dhost:~# docker run -d -p 6379:6379 redis (the server runs in the background and doesn't use persistent data).

On the host the result should be this:

thomas@s720-1:~$ lxc list
+-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| NAME  |  STATE  |         IPV4         |                     IPV6                      |   TYPE    | SNAPSHOTS |
+-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| dhost | RUNNING | 172.17.0.1 (docker0) | fd42:e8a2:640b:3029:216:3eff:febf:3433 (eth0) | CONTAINER | 0         |
|       |         | 10.68.4.133 (eth0)   |                                               |           |           |
+-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| trash | RUNNING | 10.68.4.21 (eth0)    | fd42:e8a2:640b:3029:216:3eff:feda:cd9f (eth0) | CONTAINER | 0         |
+-------+---------+----------------------+-----------------------------------------------+-----------+-----------+

thomas@s720-1:~$ telnet 10.68.4.133 6379
Trying 10.68.4.133...
Connected to 10.68.4.133.
Escape character is '^]'.
set a b
+OK
get a
$1
b
^]
telnet> quit
Connection closed.


TG9541 commented Nov 25, 2023

The S720 "front panel connector" corresponds to the "Intel 10-pin front panel connector":
(images: S720 front panel connector and Intel 10-pin front panel header pinouts)


TG9541 commented Nov 26, 2023

The S720 has constraints similar to a Raspberry Pi 4 with 2GB RAM, where in Ubuntu 22.04 zswap is enabled by default. Thus the zswap CPU/IO trade-off can be expected to improve performance; at the very least it reduces SSD wear and also provides more memory (mem+zswap+swap).

I also decided to deactivate "mitigations" and changed /etc/default/grub to:

GRUB_CMDLINE_LINUX_DEFAULT="mitigations=off zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20 zswap.accept_threshold_percent=85 zswap.zpool=z3fold"

In the Ubuntu 22.04 Server kernel the z3fold module isn't available in the initramfs, so with the line above the kernel falls back to lz4/zbud. Here is how to fix it:

sudo su
update-grub
echo z3fold >> /etc/initramfs-tools/modules
update-initramfs -u

I checked /boot/grub/grub.cfg for the changes (it might be necessary to remove /etc/default/grub.ucf-dist).
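
The active zswap settings can also be read back at runtime, assuming the module parameters are exposed under /sys/module:

sudo grep -r . /sys/module/zswap/parameters/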

Whether zswap is active can be checked with dmesg:

thomas@s720-1:~$ sudo dmesg|grep zswap:
[    3.135547] zswap: loaded using pool lz4/z3fold

The workings of zswap can be monitored in /sys/kernel/debug/zswap:

thomas@s720-1:~$ free
               total        used        free      shared  buff/cache   available
Mem:         3897932     1557512      840496        2064     1499924     2067296
Swap:        1258492           0     1258492
thomas@s720-1:~$ sudo cat /sys/kernel/debug/zswap/stored_pages
0
thomas@s720-1:~$ sudo cat /sys/kernel/debug/zswap/pool_total_size 
0
thomas@s720-1:~$ sudo cat /sys/kernel/debug/zswap/same_filled_pages 
0

Note: on an S720 with 2GiB of RAM a maximum of two idle ubuntu:22.04 VMs can be launched. When starting a third VM on this dual-core machine the CPU load rises a lot and instability has to be expected. A fourth VM will block the system, and automated restart on reboot will make repair difficult. It's possible that the memory/CPU trade-off leads to a harder upper limit when zswap is used.
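
If more instances are needed, explicitly capping VM memory might help; a hypothetical example using the standard limits.memory key (value untested on this machine):

lxc launch ubuntu:22.04 vm3 --vm -c limits.memory=512MiB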


TG9541 commented Nov 29, 2023

Working with image repositories isn't covered in the initial tutorials. Here is a quick walk-through:

thomas@s720-1:~$ lxc remote list
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                   URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org       | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                                  | lxd           | file access | NO     | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+

Running lxc image list images: returns a list of images on images.linuxcontainers.org. Getting information on specific images is also possible:

thomas@s720-1:~$ lxc image list images:busybox
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------+--------+-------------------------------+
|             ALIAS             | FINGERPRINT  | PUBLIC |              DESCRIPTION              | ARCHITECTURE |   TYPE    |  SIZE  |          UPLOAD DATE          |
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------+--------+-------------------------------+
| busybox/1.36.1 (3 more)       | 84370f56cf63 | yes    | Busybox 1.36.1 amd64 (20231128_06:00) | x86_64       | CONTAINER | 1.07MB | Nov 28, 2023 at 12:00am (UTC) |
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------+--------+-------------------------------+
| busybox/1.36.1/arm64 (1 more) | 55527ef587be | yes    | Busybox 1.36.1 arm64 (20231128_06:00) | aarch64      | CONTAINER | 0.90MB | Nov 28, 2023 at 12:00am (UTC) |
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------+--------+-------------------------------+

Running a BusyBox instance (below as a VM) is possible, although it's of limited use unless a software repository is available:

lxc launch --vm  images:busybox/1.36.1 vm-bb

Creating your own local images is also quite easy.
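
A minimal sketch using the dhost container from above (snapshot and alias names are made up):

lxc snapshot dhost clean
lxc publish dhost/clean --alias dhost-base
lxc image list local: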


TG9541 commented Jan 7, 2024

Installing DNSMASQ and enabling NAT

It may be necessary to temporarily stop the LXD daemon:

sudo snap stop lxd.daemon
sudo lxd cluster edit

A basic installation of DNSMASQ is described here:
https://computingforgeeks.com/install-and-configure-dnsmasq-on-ubuntu/

Compared to that description, the order of installing dnsmasq and disabling systemd-resolved has to be reversed:

# do this first
sudo apt update
sudo apt install dnsmasq
# then follow the description
sudo systemctl disable systemd-resolved
sudo systemctl stop systemd-resolved
ls -lh /etc/resolv.conf 
sudo unlink /etc/resolv.conf
echo nameserver 8.8.8.8 | sudo tee /etc/resolv.conf

Edit the configuration with sudo vim /etc/dnsmasq.conf, mostly following the description.

keep

#listen-address=

set

interface=enp1s0
# no-hosts
dhcp-range=10.136.200.10,10.136.200.250,12h

dhcp-host=90:1b:0e:5f:a4:30,s720-2,10.136.192.9,1045d

In /etc/hosts, add:

10.136.143.77 s720-1
#10.136.192.9 s720-2

run
sudo systemctl restart dnsmasq

Test
dig s720-2
-> s720-1 has address 10.136.143.77

Edit /etc/sysctl.conf (sudo vim /etc/sysctl.conf) and set:

net.ipv4.ip_forward=1

Activate with sudo sysctl -p /etc/sysctl.conf.

Activate NAT with an iptables rule (see https://askubuntu.com/questions/1443747/ubuntu-as-router-with-netplan-dnsmasq-cant-reach-websites):

sudo iptables -t nat -A POSTROUTING -o wlp2s0 -j MASQUERADE

The important thing is that this has to happen after the upstream interface comes online:
https://askubuntu.com/questions/1119257/how-to-load-saved-iptables-rules-on-rebooting

/etc/networkd-dispatcher/routable.d/50-ifup-hooks:

#!/bin/sh
# Run any legacy /etc/network/if-up.d and if-post-up.d hooks
# (e.g., scripts restoring iptables rules) once an interface
# becomes routable.
for d in up post-up; do
    hookdir=/etc/network/if-${d}.d
    [ -e "$hookdir" ] && /bin/run-parts "$hookdir"
done
exit 0
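
The hook script has to be executable, otherwise networkd-dispatcher ignores it:

sudo chmod +x /etc/networkd-dispatcher/routable.d/50-ifup-hooks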

One more step may be needed:
https://askubuntu.com/questions/1256921/cant-get-networkd-dispatcher-scripts-working
systemctl enable networkd-dispatcher.service

Creating a cluster

https://github.com/okossuth/lxd-tutorial


TG9541 commented Jan 13, 2024

Moving on to Incus on Debian

Since the Linux Containers community has moved on from LXD to Incus, I decided to do the same, and swap out Ubuntu for Debian in the process. Some of the learning can be carried over, some will have to be redone:

  • network-related things, since the Ubuntu Fan appears to be problematic
  • netplan isn't the default under Debian and a more organic solution is required
  • nftables replaces iptables, ip6tables, arptables and ebtables

I decided to set up a new cluster, starting with a stand-alone node S720-4. In this phase the Ubuntu LXD cluster node S720-1 (with half-working Fan) will be used as a router with DNSMASQ and NAT.

# enable the "contrib" component, which carries the ZFS packages
sed -r -i'.BAK' 's/^deb(.*)$/deb\1 contrib/g' /etc/apt/sources.list
apt update
apt install zfsutils-linux
# this takes some time
/sbin/modprobe zfs
zpool status

Follow Incus instructions on GitHub. Additional instructions are in the linuxcontainers tutorial (but run incus admin init and use the zfs default).

apt install curl
# install incus gpg key
mkdir -p /etc/apt/keyrings/
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
# install incus source 
sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc

EOF'

# install incus
apt update
apt install incus --no-install-recommends

# add user to group "incus"
sudo adduser YOUR-USERNAME incus-admin
newgrp incus-admin  

Using Incus is very similar to using LXD. The documentation at https://linuxcontainers.org/incus/docs/main is, if anything, more consistent.
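
A quick smoke test (container name made up):

incus launch images:debian/12 c1
incus exec c1 -- cat /etc/os-release
incus delete -f c1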


TG9541 commented Jan 14, 2024

Enable NAT on router node after reboot

As root, create the script (cat reads the content from stdin; finish with Ctrl-D):

cat > /usr/local/bin/nat.enable.sh 
#!/usr/bin/bash
iptables -t nat -A POSTROUTING -o wlp2s0 -j MASQUERADE

As root, create the service file:

cat > /etc/systemd/system/network.nat.service
[Unit]
After=network.target

[Service]
ExecStart=/usr/local/bin/nat.enable.sh

[Install]
WantedBy=default.target

Set mode and enable service:

sudo chmod 744 /usr/local/bin/nat.enable.sh 
sudo chmod 664 /etc/systemd/system/network.nat.service
sudo systemctl daemon-reload
sudo systemctl enable network.nat.service 

The result should be Created symlink /etc/systemd/system/default.target.wants/network.nat.service → /etc/systemd/system/network.nat.service.
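
To check that the rule is actually in place after a reboot (or after starting the service manually):

sudo systemctl start network.nat.service
sudo iptables -t nat -L POSTROUTING -n -v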


TG9541 commented Jan 16, 2024

Use Keychain

apt install keychain
echo "eval \`keychain --eval id_rsa --quiet\`" >> .profile
Copy id_rsa* to the initiator's .ssh/ and id_rsa.pub to the target's .ssh/.
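
Spelled out as a sketch (the hostnames initiator and target are placeholders; appending to authorized_keys is the usual intent of copying the public key):

scp ~/.ssh/id_rsa ~/.ssh/id_rsa.pub initiator:.ssh/
scp ~/.ssh/id_rsa.pub target:.ssh/
ssh target 'cat .ssh/id_rsa.pub >> .ssh/authorized_keys'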


TG9541 commented Feb 1, 2024

Create a Debian 32bit VM

Following Installing Debian over a serial console with minor corrections and adaptations.

Get the files in the ISO image

cd ~
sudo mount -o loop,ro ~/Downloads/debian-12.4.0-i386-netinst.iso /mnt
sudo cp -r /mnt tmp-iso
sudo umount /mnt

prepare for console use

Create some backups:

sudo su
cd tmp-iso/isolinux
cp isolinux.cfg isolinux.cfg.bak
cp txt.cfg txt.cfg.bak
cp adtxt.cfg adtxt.cfg.bak

Edit the file isolinux/isolinux.cfg and add the serial 0 115200 and console 0 options (i.e., as the 1st and 2nd line):

serial 0 115200
console 0
path
include menu.cfg
default vesamenu.c32

Next, edit the file isolinux/txt.cfg and add console=ttyS0,115200n8 to every line with the append option (i.e., the 4th line):

label install
        menu label ^Install
        kernel /install.386/vmlinuz
        append vga=788 console=ttyS0,115200n8 initrd=/install.386/initrd.gz --- console=ttyS0,115200n8

Finally, edit the file isolinux/adtxt.cfg and add the console=ttyS0,115200n8 to every line with the append option:

label expert
        menu label E^xpert install
        kernel /install.386/vmlinuz
        append priority=low vga=788 console=ttyS0,115200n8 initrd=/install.386/initrd.gz --- console=ttyS0,115200n8
include rqtxt.cfg
label auto
        menu label ^Automated install
        kernel /install.386/vmlinuz
        append auto=true priority=critical vga=788 console=ttyS0,115200n8 initrd=/install.386/initrd.gz --- console=ttyS0,115200n8

Create ISO image

apt install xorriso
# run from inside the tmp-iso directory; the trailing "." (the source tree)
# is assumed here, since the mkisofs emulation needs a source pathspec
xorriso -as mkisofs -r -J -joliet-long -l -cache-inodes \
  -isohybrid-mbr /usr/lib/ISOLINUX/isohdpfx.bin -partition_offset 16 \
  -A "Deb12.4 i386 C" -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  -o /home/thomas/debian-12.4-i386-serial-install.iso .


TG9541 commented Feb 4, 2024

Installing Debian i386 using a Spice console

Getting Debian installed into an empty VM from the serial console still needs some work; it's much easier using a Debian Desktop installation.

Installing XFCE on Debian needs a tweak: a middle click on the touchpad shouldn't close windows. The xfce4-whiskermenu-plugin also helps.

Here are the steps for installing it with the help of a Spice console:

sudo apt install spice-client-gtk
incus init --empty debian32 --vm -c security.secureboot=false -c security.csm=true
incus config device add debian32 install disk source=/home/thomas/Downloads/debian-12.4.0-i386-netinst.iso boot.priority=10

# before restart remove install ISO
incus config device remove debian32 install

# this won't work yet (the incus-agent isn't running)
incus exec debian32 bash

incus console debian32 --type=vga
# log on as root, then on the debian32 console:
mount -t 9p config /mnt/
/mnt/install.sh
umount /mnt
systemctl start incus-agent
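
Once the incus-agent is running, shell access from the host (the command that failed above) should work:

incus exec debian32 bash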


TG9541 commented Feb 22, 2024

Various incus notes

Create storage pool on device and use it

incus storage create sddpool zfs source=/dev/sda
incus launch images:debian/12 -s sddpool

Mounting a host directory

incus config device add debian32 home disk source=/home/thomas/ path=/mnt/home
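
Inside the guest the share should then be visible; a quick check:

incus exec debian32 -- ls /mnt/home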
