@TG9541
Last active February 22, 2024 05:21
Ubuntu Server on a Futro S720 with LXD and Docker - setup notes

LXD on Futro S720, basic installation

The following notes assume a Futro S720 "thin client" with a dual-core AMD GX-217GA SoC, 2GiB DDR3 RAM and the standard "2GB" mSATA SSD. There is no way to install Ubuntu 22.04 Server on the 2GB SSD (an attempt to install "Ubuntu Server minimal" resulted in a crash), so I connected a 5V SATA power supply cable to pins 1 and 7 of the "2xUSB" header and installed an HDD. The 10-pin 2xUSB header, next to the mSATA slot, has the following pinout: 1/2: 5V, 7/8: GND, 9: (key), 10: 3.3V.

What we have is:

  • Ubuntu Server 22.04.3 on a 4GB thumb drive
  • Normal installation on HDD
  • Disk partitioning:
    • 2GB mSATA SSD partitioned as:
      • /boot (auto size)
      • swap (rest)
    • 500GB 2.5" WD HDD partitioned as:
      • / on /dev/sda1, 20GB
      • /dev/sda2, rest (unformatted partition) for the default ZFS pool in lxd init
  • USB WLAN dongle for the WAN network: wlx4ce676b72edd (Sony)
  • Gigabit Ethernet: enp1s0
  • cloud-init service disabled with sudo touch /etc/cloud/cloud-init.disabled

Test a bit and then reboot.

The installation was done with a 4GiB DDR3 SO-DIMM, but later operation was tested with the original 2GiB RAM module: playing with LXD works (e.g., running a few LXC containers alongside a single Ubuntu VM). I assume that the installation also works fine with the original 2GiB module. Any real workload, e.g., running Canonical MAAS with its PostgreSQL DB, requires at least 4GiB RAM.
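The hardware claims above are easy to verify on the box itself with standard tools (a sketch; nothing here is specific to this setup):

```shell
# CPU cores reported by the kernel (the GX-217GA is dual-core)
nproc
# installed RAM
free -h
# disk layout, including the mSATA SSD and the added HDD
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```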

LXD configuration

Bridging

lxd init offers to create a bridge device; by default that's lxdbr0:

root@s720-1:/etc/netplan# networkctl 
IDX LINK            TYPE     OPERATIONAL SETUP      
  1 lo              loopback carrier     unmanaged
  2 enp1s0          ether    no-carrier  configuring
  3 wlx4ce676b72edd wlan     routable    configured 
  4 lxdbr0          bridge   routable    unmanaged
  6 veth904f4461    ether    carrier     unmanaged

Understanding bridging

First of all, it's important to know what a bridge is.

There are potentially outdated (and misleading) explanations of LXD bridging out there (e.g., netplan generate is included in netplan apply, and netplan try might be more appropriate). LXD networking: lxdbr0 explained helps to understand what's really intended.

When settings need to be changed, that's likely to happen with lxc network set <network> <key> <value>. The keys are explained in the LXD network-bridge configuration docs.
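For example (a sketch; the bridge name lxdbr0 and the values are illustrative, not taken from this setup):

```shell
# switch off IPv6 on the managed bridge
lxc network set lxdbr0 ipv6.address none
# pin the IPv4 subnet instead of the auto-assigned one
lxc network set lxdbr0 ipv4.address 10.0.0.1/24
# inspect the result
lxc network show lxdbr0
```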

Bridging notes

I tried to re-run the bridge configuration in lxd init. Since lxdbr0 was already in use, I created a new bridge, lxdbr1:

root@s720-1:/etc/default# lxc network show lxdbr1
config:
  ipv4.address: 10.55.62.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:3473:56a5:c29c::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr1
type: bridge
used_by:
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

Removing the bridge required "un-using" it first:

root@s720-1:/etc/default# lxc network detach-profile lxdbr1 default
root@s720-1:/etc/default# lxc network delete lxdbr1
Network lxdbr1 deleted

Bridging WIFI limitations (?)

Bridging over WiFi may be limited; a possible workaround is the Ubuntu Fan, a kernel-based network mapping between internal (LXD) and external (hosts in a network) IP addresses, effectively providing a poor man's SDN. See the tutorial Easy overlay networking in LXD with the Ubuntu Fan.

Reading up on Netplan

The netplan tutorial uses an LXD-provisioned VM with a bridge device on the host level.

The netplan apply docs mention that netplan only works in the "yaml-to-backend" direction regarding virtual devices (e.g., bridges or bonds). It can create bridges, but they need to be removed manually using "network backend" commands (e.g., ip link delete dev bond0), or the system/VM needs to be restarted.
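For illustration, a minimal netplan bridge definition (the file name and the enp1s0 membership are assumptions, not part of this setup):

```yaml
# /etc/netplan/60-bridge.yaml (hypothetical)
network:
  version: 2
  ethernets:
    enp1s0: {}
  bridges:
    br0:
      interfaces: [enp1s0]
      dhcp4: true
```

netplan apply will create br0, but removing the stanza and re-applying leaves the device behind; it has to be removed with ip link delete dev br0 or by a reboot.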


TG9541 commented Jan 7, 2024

Installing DNSMASQ and enabling NAT

It may be necessary to temporarily stop the LXD daemon:

sudo snap stop lxd.daemon
sudo lxd cluster edit

A basic installation of DNSMASQ is described here:
https://computingforgeeks.com/install-and-configure-dnsmasq-on-ubuntu/

Compared to that description, the order of installing dnsmasq and disabling systemd-resolved has to be reversed:

# do this first
sudo apt update
sudo apt install dnsmasq
# then follow the description
sudo systemctl disable systemd-resolved
sudo systemctl stop systemd-resolved
ls -lh /etc/resolv.conf 
sudo unlink /etc/resolv.conf
echo nameserver 8.8.8.8 | sudo tee /etc/resolv.conf

Edit with sudo vim /etc/dnsmasq.conf, mostly following the description:

keep

#listen-address=

set

interface=enp1s0
# no-hosts
dhcp-range=10.136.200.10,10.136.200.250,12h

dhcp-host=90:1b:0e:5f:a4:30,s720-2,10.136.192.9,1045d

Add to /etc/hosts:

10.136.143.77 s720-1
#10.136.192.9 s720-2

Run:

sudo systemctl restart dnsmasq

Test:

dig s720-2
-> s720-1 has address 10.136.143.77
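Before restarting, the configuration can be syntax-checked (assuming dnsmasq is installed):

```shell
# checks /etc/dnsmasq.conf; prints "dnsmasq: syntax check OK" on success
sudo dnsmasq --test
```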

Edit with sudo vim /etc/sysctl.conf and set:

net.ipv4.ip_forward=1

activate with sudo sysctl -p /etc/sysctl.conf
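Whether forwarding is actually on can be checked via procfs:

```shell
# prints 1 when IPv4 forwarding is enabled, 0 otherwise
cat /proc/sys/net/ipv4/ip_forward
```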

https://askubuntu.com/questions/1443747/ubuntu-as-router-with-netplan-dnsmasq-cant-reach-websites
Activate NAT with an iptables rule:
sudo iptables -t nat -A POSTROUTING -o wlp2s0 -j MASQUERADE

The important thing is that this has to happen after the upstream interface comes on-line:
https://askubuntu.com/questions/1119257/how-to-load-saved-iptables-rules-on-rebooting

/etc/networkd-dispatcher/routable.d/50-ifup-hooks:

#!/bin/sh
for d in up post-up; do
    hookdir=/etc/network/if-${d}.d
    [ -e $hookdir ] && /bin/run-parts $hookdir
done
exit 0

One more step may be needed:
https://askubuntu.com/questions/1256921/cant-get-networkd-dispatcher-scripts-working
systemctl enable networkd-dispatcher.service

Creating a cluster

https://github.com/okossuth/lxd-tutorial


TG9541 commented Jan 13, 2024

Moving on to Incus on Debian

Since the Linux Containers community has moved on from LXD to Incus I decided to do the same, and swap out Ubuntu for Debian in the process. Some of the learning can be carried over, some will have to be redone:

  • network-related things, since Ubuntu FAN appears to be problematic
  • netplan isn't the default under Debian and a more organic solution is required
  • nftables replaces iptables, ip6tables, arptables and ebtables

I decided to set up a new cluster, starting with a stand-alone node S720-4. In this phase the Ubuntu LXD cluster node S720-1 (with its half-working FAN) will be used as a router with DNSMASQ and NAT.

sed -r -i'.BAK' 's/^deb(.*)$/deb\1 contrib/g' /etc/apt/sources.list
apt update
apt install zfsutils-linux
# this takes some time
/sbin/modprobe zfs
zpool status
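The sed one-liner above appends contrib to every deb line of sources.list; it can be dry-run against a scratch file first (the file content here is a made-up example):

```shell
# sample sources.list line (illustrative)
printf 'deb http://deb.debian.org/debian bookworm main\n' > /tmp/sources.list.test
# same substitution, but without -i, so the original file is untouched
sed -r 's/^deb(.*)$/deb\1 contrib/g' /tmp/sources.list.test
# -> deb http://deb.debian.org/debian bookworm main contrib
```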

Follow the Incus installation instructions on GitHub. Additional instructions are in the linuxcontainers tutorial (but run incus admin init and use the zfs default).

apt install curl
# install incus gpg key
mkdir -p /etc/apt/keyrings/
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
# install incus source 
sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc

EOF'

# install incus
apt update
apt install incus --no-install-recommends

# add user to group "incus-admin"
sudo adduser YOUR-USERNAME incus-admin
newgrp incus-admin

Using Incus is very similar to using LXD. The documentation at https://linuxcontainers.org/incus/docs/main is, if anything, more consistent.


TG9541 commented Jan 14, 2024

Enable NAT on router node after reboot

As root, create script:

cat > /usr/local/bin/nat.enable.sh <<'EOF'
#!/usr/bin/bash
iptables -t nat -A POSTROUTING -o wlp2s0 -j MASQUERADE
EOF

As root create service file:

cat > /etc/systemd/system/network.nat.service <<'EOF'
[Unit]
After=network.target

[Service]
ExecStart=/usr/local/bin/nat.enable.sh

[Install]
WantedBy=default.target
EOF

Set mode and enable service:

sudo chmod 744 /usr/local/bin/nat.enable.sh 
sudo chmod 664 /etc/systemd/system/network.nat.service
sudo systemctl daemon-reload
sudo systemctl enable network.nat.service 

The result should be Created symlink /etc/systemd/system/default.target.wants/network.nat.service → /etc/systemd/system/network.nat.service.
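Whether the unit is wired up correctly can be checked after a reboot (a sketch, assuming the service name above):

```shell
# should report "enabled"
systemctl is-enabled network.nat.service
# the MASQUERADE rule should show up once, not duplicated
sudo iptables -t nat -S POSTROUTING
```

Note that iptables -A appends a new rule on every service start; guarding it with iptables -C ... || iptables -A ... in the script would make it idempotent.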


TG9541 commented Jan 16, 2024

Use Keychain

apt install keychain
echo "eval \`keychain --eval id_rsa --quiet\`" >> .profile
# copy the key pair to the initiator's ~/.ssh/
scp id_rsa id_rsa.pub initiator:.ssh/
# authorize the public key on the target
ssh-copy-id -i id_rsa.pub target


TG9541 commented Feb 1, 2024

Create a Debian 32bit VM

Following Installing Debian over a serial console with minor corrections and adaptations.

Get the files in the ISO image

cd ~
sudo mount -o loop,ro ~/Downloads/debian-12.4.0-i386-netinst.iso /mnt
sudo cp -r /mnt tmp-iso
sudo umount /mnt

Prepare for console use

Create some backups:

sudo su
cd tmp-iso/isolinux
cp isolinux.cfg isolinux.cfg.bak
cp txt.cfg txt.cfg.bak
cp adtxt.cfg adtxt.cfg.bak

Edit the file isolinux/isolinux.cfg and add the serial 0 115200 and console 0 options (i.e., 1st and 2nd line):

serial 0 115200
console 0
path
include menu.cfg
default vesamenu.c32

Next, edit the file isolinux/txt.cfg and add console=ttyS0,115200n8 to every line with the append option (i.e., the 4th line):

label install
        menu label ^Install
        kernel /install.386/vmlinuz
        append vga=788 console=ttyS0,115200n8 initrd=/install.386/initrd.gz --- console=ttyS0,115200n8

Finally, edit the file isolinux/adtxt.cfg and add the console=ttyS0,115200n8 to every line with the append option:

label expert
        menu label E^xpert install
        kernel /install.386/vmlinuz
        append priority=low vga=788 console=ttyS0,115200n8 initrd=/install.386/initrd.gz --- console=ttyS0,115200n8
include rqtxt.cfg
label auto
        menu label ^Automated install
        kernel /install.386/vmlinuz
        append auto=true priority=critical vga=788 console=ttyS0,115200n8 initrd=/install.386/initrd.gz --- console=ttyS0,115200n8

Create ISO image

apt install xorriso
xorriso -as mkisofs -r -J -joliet-long -l -cache-inodes \
  -isohybrid-mbr /usr/lib/ISOLINUX/isohdpfx.bin -partition_offset 16 \
  -A "Deb12.4 i386 C" -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  -o /home/thomas/debian-12.4-i386-serial-install.iso


TG9541 commented Feb 4, 2024

Installing Debian i386 using a Spice console

Getting Debian installed into an empty VM from the serial console still needs some work. It's much easier using a Debian Desktop installation.

Installing XFCE on Debian needs a tweak: a middle click on the touchpad shouldn't close windows. The xfce4-whiskermenu-plugin also helps.

Here are the steps for installing it with the help of a Spice console:

sudo apt install spice-client-gtk
incus init --empty debian32 --vm -c security.secureboot=false -c security.csm=true
incus config device add debian32 install disk source=/home/thomas/Downloads/debian-12.4.0-i386-netinst.iso boot.priority=10

# before restart remove install ISO
incus config device remove debian32 install

# this won't work
incus exec debian32 bash

incus console --type=vga
#log-on as root, then on the debian32 console:
mount -t 9p config /mnt/
/mnt/install.sh
umount /mnt
systemctl start incus-agent
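Once incus-agent is running inside the guest, the exec path that failed earlier should work (an assumed check, run on the host):

```shell
# now succeeds, since incus-agent is running in the VM
incus exec debian32 -- uname -a
```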


TG9541 commented Feb 22, 2024

Various incus notes

Create storage pool on device and use it

incus storage create sddpool zfs source=/dev/sda
incus launch images:debian/12 -s sddpool

Mounting a host directory

incus config device add debian32 home disk source=/home/thomas/ path=/mnt/home
