@airbreather
Last active April 30, 2024 12:59

My Arch Server Install

Based on https://wiki.archlinux.org/title/Installation_guide, retrieved 2024-01-27.

This file will document any changes or details that I make for my own purposes.

Minimum needed to get out of console redirection

Important note: this same root password will be used for the system.

passwd
ip addr

Note the IP address, then remote in from the client:

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@THE_IP

Get into the chroot

This ignores my zpool, since that's handled already.

Also, specifics are initially being written for the VM that I'm testing this on.

As usual, the compress=zstd option on the first mounted subvolume will in practice apply to all other mounted subvolumes of the same filesystem. I'm still setting it per-subvolume the way I would if that weren't the case.

fdisk -l
device=/dev/vda
vared -p 'Root device: ' -r "[$device]" device
echo 'g\nn\n\n\n+1G\nt\n1\nn\n\n\n\nw' | fdisk $device
sync
mkfs.fat -F 32 -n ESP ${device}1
mkfs.btrfs -L MAIN ${device}2
mount ${device}2 /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
btrfs subvolume create /mnt/@snapshots
btrfs subvolume create /mnt/@var_log
btrfs subvolume create /mnt/@var_pacman_pkg
umount /mnt
mount -o subvol=@,compress=zstd ${device}2 /mnt
mount --mkdir ${device}1 /mnt/boot
mount --mkdir -o subvol=@home,compress=zstd ${device}2 /mnt/home
mount --mkdir -o subvol=@snapshots ${device}2 /mnt/.snapshots
mount --mkdir -o subvol=@var_log,compress=zstd ${device}2 /mnt/var/log
mount --mkdir -o subvol=@var_pacman_pkg ${device}2 /mnt/var/pacman/pkg
while systemctl show reflector | grep -q ActiveState=activating; do echo Waiting for Reflector to finish...; sleep 1s; done
echo Reflector finished
perl -pi -e 's/^#(?=(?:Color)|(?:ParallelDownloads = \d+)$)//' /etc/pacman.conf
pacstrap -PK /mnt base linux-lts dracut base-devel linux-lts-headers linux-firmware intel-ucode btrfs-progs emacs-nox git man-db man-pages texinfo openssh pacman-contrib samba nfs-utils dkms zsh devtools nginx certbot certbot-dns-cloudflare nodejs npm python
genfstab -L /mnt >> /mnt/etc/fstab
ln -sf ../run/systemd/resolve/stub-resolv.conf /mnt/etc/resolv.conf
cut -f 2 -d: /etc/shadow | head -n1 > /mnt/etc/.root-password
arch-chroot /mnt
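One note on the partitioning one-liner above: it relies on zsh's echo interpreting \n escapes. Decoded keystroke by keystroke, as a sketch using printf (whose escape handling is the same in any shell):

```shell
# The scripted fdisk input from above, annotated. printf is used because
# its escape handling is consistent across shells, unlike echo's.
fdisk_input() {
    printf 'g\n'        # g: create a new empty GPT partition table
    printf 'n\n\n\n'    # n: new partition, default number and first sector
    printf '+1G\n'      #    last sector: +1G (this becomes the ESP)
    printf 't\n1\n'     # t: set the (only) partition's type; "1" = EFI System
    printf 'n\n\n\n\n'  # n: new partition, defaults fill the rest of the disk
    printf 'w\n'        # w: write the table and exit
}
# Usage would be: fdisk_input | fdisk "$device"
fdisk_input
```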

Now you're in the chroot

pacman -S --noconfirm --asdeps bat elfutils aspnet-runtime-6.0 jellyfin-ffmpeg sqlite
cat >/root/.emacs <<END
(setq make-backup-files nil)
END
ln -sf /usr/share/zoneinfo/America/Detroit /etc/localtime
systemctl enable systemd-timesyncd.service
hwclock --systohc
perl -pi -e 's/#(?=en_US\.UTF-8 UTF-8)//' /etc/locale.gen
locale-gen
cat >/etc/locale.conf <<END
LANG=en_US.UTF-8
END
cat >/etc/hostname <<END
microwave
END
cat >/etc/systemd/network/20-wired.network <<END
[Match]
Name=enp1s0

[Network]
DHCP=yes
END
systemctl enable systemd-networkd.service systemd-resolved.service
cat >/etc/dracut.conf.d/myflags.conf << END
uefi="yes"
compress="zstd"
kernel_cmdline="root=LABEL=MAIN rootflags=subvol=@,compress=zstd"
END
for k in /usr/lib/modules/*; do dracut --kver $(basename "$k"); done
bootctl install
cat >/boot/loader/loader.conf <<END
timeout 0
console-mode keep
editor no
END
usermod -p `cat /etc/.root-password` root
useradd -m -G wheel,users -U -s /usr/bin/zsh -p `cat /etc/.root-password` joe 
rm /etc/.root-password
cat >/etc/sudoers.d/00_wheel <<END
%wheel ALL=(ALL:ALL) ALL
END
perl -pi -e 's/(?<=-march=x86-64) /-v3 / ; s/(?<=-mtune=)generic/native/ ; s/^#(RUSTFLAGS="[^"]*)"/$1 -C target-cpu=x86-64-v3"/ ; s/^#(?<prefix>MAKEFLAGS="-j)(\d+)(?<postfix>.*)$/$+{prefix}10$+{postfix}/' /etc/makepkg.conf
systemctl enable sshd.service paccache.timer
mkdir -m 0700 /home/joe/.ssh
cat >/home/joe/.ssh/authorized_keys <<END
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAr0IfmRJc7BfyCrsFijT/f6dETqrkjevl9Zmq1yCvrvPMB8zy4IGs+2U+0L8CUOQcAC1a8UEYDYCO8CUMvx0EFTdhzWSOmqMI0fLIu6CNoSysGNWhp54zrCoc1SlWopa4EB3MbL3oyOQUifTjpMdatpKP9UXf6aKNli3NVi9C3ws=
END
cat >/home/joe/.emacs <<END
(setq make-backup-files nil)
END
touch /home/joe/.zshrc
chown -R joe:joe /home/joe/.ssh /home/joe/.emacs /home/joe/.zshrc
exit

Now you're out of the chroot

umount -R /mnt
reboot

Post-Install

Ideally this should be converted to a script, but I'm still building it out, so for now it's broken out into steps based on https://wiki.archlinux.org/title/General_recommendations.

Resume through SSH

Connect to the server as joe (there should be no password prompt), then:

curl -o install-omz.sh -L https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh
cat >install-omz.sh.sha256 <<END
5018c155b9e1b6925766014efc913773ce0b551cfbf364537e120d0a2d5d1aba install-omz.sh
END
sha256sum -c install-omz.sh.sha256 || exit 1
sh install-omz.sh
rm install-omz.sh{,.sha256}
perl -pi -e 's/(?<=ZSH_THEME=")robbyrussell(?=")/strug/ ; s/# (?=(?:HYPHEN_INSENSITIVE="true")|(?:COMPLETION_WAITING_DOTS="true")|(?:DISABLE_MAGIC_FUNCTIONS="true"))//' ~/.zshrc
cat >/home/joe/.oh-my-zsh/custom/prefs.zsh <<END
export EDITOR=/usr/bin/emacs
export DIFFPROG=/usr/bin/meld
setopt appendhistory
setopt INC_APPEND_HISTORY
END

Reconnect as joe one last time.

AUR Helper

ZFS comes from the AUR. I need to be updating ZFS regularly, so I need to hold my nose here.

mkdir -p ~/src/paru-bin
pushd ~/src/paru-bin
git clone https://aur.archlinux.org/paru-bin .
makepkg -si
popd
sudo perl -pi -e 's/#(?=Chroot|SudoLoop)//' /etc/paru.conf

ZFS

ALWAYS:

paru -S zfs-dkms
sudo mkdir -p /etc/zfs/zfs-list.cache

ONLY DURING TESTING:

for i in {1..5}; do fallocate -l 10G ~/$i.img; done
sudo zpool create nas raidz1 ~/{1,2,3,4,5}.img
for f in joe kristina media plex archipelago foundryvtt foundryvtt2 archipelago2
do
    sudo zfs create -o compression=zstd nas/$f
    sudo touch /nas/$f/some-file-owned-by-$f
done
sudo chown -R 1000:1000 /nas/joe
sudo chown -R 1002:1002 /nas/kristina
sudo chown -R 117:126 /nas/media
sudo chown -R 997:997 /nas/plex
sudo chown -R 996:996 /nas/archipelago
sudo chown -R 995:995 /nas/foundryvtt
sudo chown -R 994:994 /nas/foundryvtt2
sudo chown -R 993:993 /nas/archipelago2
sudo mkdir -p /nas/media/junk/PlexCacheThing
sudo chown 0:100 /nas/media/junk
sudo chown 997:997 /nas/media/junk/PlexCacheThing
sudo mkdir -p /nas/media/jellyfin
sudo chown 117:126 /nas/media/jellyfin

ALWAYS:

sudo systemctl enable --now zfs.target zfs-import.target zfs-import-cache.service zfs-mount.service zfs-zed.service
sudo touch /etc/zfs/zfs-list.cache/nas

Generate a new initramfs on kernel upgrade

Not sure what weird thing dracut-ukify is doing. This is all I need for this setup... it's a lot more code to write out here, but it's the simplest approach I've seen so far. Which probably means that I'm doing it wrong, but *shrugs*.

sudo mkdir /.snapshots/backup-efi
cat >/tmp/airbreather-runs-dracut-like-this.sh <<'END'
#!/usr/bin/bash

# Move aside any UKI whose kernel is no longer installed, then rebuild
# images for every installed kernel.
kvers=($(basename -a /usr/lib/modules/*))
for img in /boot/EFI/Linux/linux-*.efi
do
    [ -e "$img" ] || continue  # glob matched nothing
    kver_img=$(basename "$img")
    found=0
    for kver in "${kvers[@]}"
    do
        if [[ $kver_img = "linux-$kver-"* ]]
        then
            found=1
            break
        fi
    done

    # only move images whose kernel is no longer installed
    if [[ $found = 0 ]]
    then
        mv "$img" /.snapshots/backup-efi/
    fi
done

for kver in "${kvers[@]}"
do
    dracut --force --kver "$kver"
done
END
chmod +x /tmp/airbreather-runs-dracut-like-this.sh
sudo mv /tmp/airbreather-runs-dracut-like-this.sh /usr/local/bin/
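The stale-image detection in that script hinges on bash glob matching against the UKI filenames. The core of it, checked against hypothetical names (the `linux-<kver>-` prefix is the naming the script expects dracut to produce):

```shell
#!/bin/bash
# Sketch of the filename test from the script above, with made-up kernel
# versions and image names.
kvers=("6.6.30-1-lts")
match() {
    local base=$1 kver
    for kver in "${kvers[@]}"; do
        # matches the UKI naming the script above expects: linux-<kver>-*
        [[ $base == "linux-$kver-"* ]] && return 0
    done
    return 1
}
match "linux-6.6.30-1-lts-x86_64.efi" && echo "keep"            # installed kernel
match "linux-6.1.55-1-lts-x86_64.efi" || echo "move to backup"  # removed kernel
```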

cat >/tmp/90-airbreather-installs-dracut-like-this.hook <<END
[Trigger]
Type = Path
Operation = Install
Operation = Upgrade
Target = usr/lib/modules/*/pkgbase
Target = usr/lib/dracut/*
Target = usr/lib/systemd/systemd
Target = usr/lib/systemd/boot/efi/*.efi.stub
Target = usr/src/*/dkms.conf

[Action]
Description = Updating linux images, the airbreather way...
When = PostTransaction
Exec = /usr/local/bin/airbreather-runs-dracut-like-this.sh
NeedsTargets
END
chmod 0644 /tmp/90-airbreather-installs-dracut-like-this.hook
cat >/tmp/60-airbreather-runs-dracut-on-remove-like-this.hook <<END
[Trigger]
Type = Path
Operation = Remove
Target = usr/lib/modules/*/pkgbase
Target = usr/src/*/dkms.conf

[Action]
Description = Removing linux images...
When = PreTransaction
Exec = /usr/local/bin/airbreather-runs-dracut-like-this.sh
NeedsTargets
END
chmod 0644 /tmp/60-airbreather-runs-dracut-on-remove-like-this.hook
sudo mkdir -p /etc/pacman.d/hooks
sudo mv /tmp/90-airbreather-installs-dracut-like-this.hook /tmp/60-airbreather-runs-dracut-on-remove-like-this.hook /etc/pacman.d/hooks/

NFS

sudo systemctl enable --now nfsv4-server.service zfs-share.service
sudo zfs set sharenfs=on nas

Samba

cat >/tmp/smb.conf <<END
[global]
    usershare path = /var/lib/samba/usershares
    usershare max shares = 100
    usershare allow guests = yes
    usershare owner only = no
    workgroup = WORKGROUP
    server string = Samba Server
    server role = standalone server
    log file = /var/log/samba/log.%m
    max log size = 1000
    dns proxy = no
END
sudo mv /tmp/smb.conf /etc/samba/
sudo mkdir -p /var/lib/samba/usershares
sudo chmod +t /var/lib/samba/usershares
sudo systemctl enable --now smb.service
sudo zfs set sharesmb=on nas

Certbot

read -s 'cloudflaretoken?Cloudflare API token: '
echo
read '_ignore?Using test-mode certificates until "go mode". Press Enter to acknowledge that you are not in "go mode".'
echo
sudo mkdir -p /root/.secrets/certbot
sudo chmod 0700 /root/.secrets
sudo chmod 0700 /root/.secrets/certbot
sudo tee /root/.secrets/certbot/cloudflare.ini >/dev/null <<END
dns_cloudflare_api_token=$cloudflaretoken
END
unset cloudflaretoken
sudo chmod 0600 /root/.secrets/certbot/cloudflare.ini
sudo certbot certonly --email 'airbreather@linux.com' --test-cert --agree-tos --non-interactive --dns-cloudflare --dns-cloudflare-credentials /root/.secrets/certbot/cloudflare.ini --dns-cloudflare-propagation-seconds 30 -d 'startcodon.com' -d '*.startcodon.com'
sudo certbot certonly --email 'airbreather@linux.com' --test-cert --agree-tos --non-interactive --dns-cloudflare --dns-cloudflare-credentials /root/.secrets/certbot/cloudflare.ini --dns-cloudflare-propagation-seconds 30 -d 'airbreather.party' -d '*.airbreather.party'
sudo chmod 0755 /etc/letsencrypt/{live,archive}
sudo systemctl enable --now certbot-renew.timer

nginx

sudo groupadd www
sudo useradd -g www www
sudo mkdir -p /var/www/startcodon.com
sudo chown -R www:www /var/www
cat >/tmp/nginx.conf <<'END'
user www;
worker_processes auto;
worker_cpu_affinity auto;

events {
    multi_accept on;
    worker_connections 1024;
}

http {
    charset utf-8;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    log_not_found off;
    types_hash_max_size 4096;
    client_max_body_size 16M;

    # MIME
    include mime.types;
    default_type application/octet-stream;

    # logging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log warn;

    # load configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    # redirect all HTTP to HTTPS
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }
}
END
cat >/tmp/based-on-moz-ssl.conf <<'END'
# generated 2024-01-28, Mozilla Guideline v5.7, nginx 1.17.7, OpenSSL 1.1.1k, modern configuration
# https://ssl-config.mozilla.org/#server=nginx&version=1.17.7&config=modern&openssl=1.1.1k&guideline=5.7

# guts ripped out by airbreather, just trying to get the specific security-relevant parts. I can
# configure nginx myself otherwise

ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m;  # about 40000 sessions
ssl_session_tickets off;

# modern configuration
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;

# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;

# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;

# airbreather 2024-01-28: the following line is not copy-pasted blindly, despite the source.
resolver 127.0.0.1;
END
mkdir /tmp/conf.d
cat >/tmp/conf.d/00-airbreather.party.conf <<'END'
server {
    server_name airbreather.party www.airbreather.party;

    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/letsencrypt/live/airbreather.party/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/airbreather.party/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/airbreather.party/chain.pem;
    include /etc/nginx/based-on-moz-ssl.conf;
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
END
cat >/tmp/conf.d/00-startcodon.com.conf <<'END'
server {
    server_name archipelago.startcodon.com;

    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/letsencrypt/live/startcodon.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/startcodon.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/startcodon.com/chain.pem;
    include /etc/nginx/based-on-moz-ssl.conf;
    location / {
        # Set proxy headers
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # These are important to support WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";

        proxy_pass http://localhost:55555/;
    }
}
server {
    server_name startcodon.com www.startcodon.com;

    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/letsencrypt/live/startcodon.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/startcodon.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/startcodon.com/chain.pem;
    include /etc/nginx/based-on-moz-ssl.conf;

    location / {
        root /var/www/startcodon.com/html;
        add_header 'Access-Control-Allow-Origin' '*' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always;
        add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
    }

    location /foundryvtt {
        # Set proxy headers
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # These are important to support WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";

        proxy_pass http://localhost:30000;
    }

    location /foundryvtt2 {
        # Set proxy headers
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # These are important to support WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";

        proxy_pass http://localhost:30001;
    }
}
END
sudo chown -R root:root /tmp/{nginx.conf,based-on-moz-ssl.conf,conf.d}
sudo mv /tmp/{nginx.conf,based-on-moz-ssl.conf,conf.d} /etc/nginx/
cat >/tmp/00-nginx-reload.sh <<END
#!/usr/bin/sh

systemctl reload nginx.service
END
chmod +x /tmp/00-nginx-reload.sh
sudo mv /tmp/00-nginx-reload.sh /etc/letsencrypt/renewal-hooks/deploy/
sudo systemctl enable --now nginx.service

Additional users and groups

# Three ways I could have done this:
# 1. renumber the clashes that Arch brings to us out-of-the-box, then chown everything in the base
#    system accordingly
# 2. accept different UID / GID values, then chown everything in the zpool accordingly
# 3. before the migration, on the source side: assign different non-clashing values, then chown
#    everything in the zpool accordingly
#
# I feel like #2 is the safest, assuming that I don't make any stupid mistakes outside the scope of
# this Gist. I don't expect any important parts of the Arch ecosystem to do anything as insane as to
# assume that specific groups have specific GID values, but the instant that I pulled an AUR helper
# into all this, I also opted-in to assuming that any given package might be doing some chaotic
# neutral things to make the underlying software work. It also should result in IDs that fall within
# the appropriate ranges as defined by the **very different** /etc/login.defs files, which *can* be
# treated as in-scope for the Arch ecosystem to make assumptions for routines that have no better
# option. So I add a step that will be obnoxious to reverse — but not impossible by any means — if I
# need to abort partway through.

sudo useradd -G users -U -m kristina
sudo useradd -r -U -m -d /var/lib/jellyfin jellyfin
sudo useradd -r -U -M -d /usr/lib/plexmediaserver plex
sudo useradd -r -U -M -d /nas/archipelago/home archipelago
sudo useradd -r -U -M -d /nas/foundryvtt/home foundryvtt
sudo useradd -r -U -M -d /nas/archipelago2/home archipelago2
sudo useradd -r -U -M -d /nas/foundryvtt2/home foundryvtt2

echo '#!/usr/bin/sh' >/home/joe/remap_ids.sh
joe_uid=$(id -u joe)
kristina_uid=$(id -u kristina)
jellyfin_uid=$(id -u jellyfin)
plex_uid=$(id -u plex)
archipelago_uid=$(id -u archipelago)
foundryvtt_uid=$(id -u foundryvtt)
archipelago2_uid=$(id -u archipelago2)
foundryvtt2_uid=$(id -u foundryvtt2)
for map in 117:$jellyfin_uid 993:$archipelago2_uid 994:$foundryvtt2_uid 995:$foundryvtt_uid 996:$archipelago_uid 997:$plex_uid 1000:$joe_uid 1002:$kristina_uid
do
    old_uid=${map%:*}
    new_uid=${map#*:}
    echo find /nas -uid $old_uid -exec chown --no-dereference $new_uid "'{}'" "';'" >> /home/joe/remap_ids.sh
done
joe_gid=$(getent group joe | cut -d: -f3)
users_gid=$(getent group users | cut -d: -f3)
kristina_gid=$(getent group kristina | cut -d: -f3)
jellyfin_gid=$(getent group jellyfin | cut -d: -f3)
plex_gid=$(getent group plex | cut -d: -f3)
archipelago_gid=$(getent group archipelago | cut -d: -f3)
foundryvtt_gid=$(getent group foundryvtt | cut -d: -f3)
archipelago2_gid=$(getent group archipelago2 | cut -d: -f3)
foundryvtt2_gid=$(getent group foundryvtt2 | cut -d: -f3)
for map in 126:$jellyfin_gid 993:$archipelago2_gid 994:$foundryvtt2_gid 995:$foundryvtt_gid 996:$archipelago_gid 997:$plex_gid 1000:$joe_gid 1002:$kristina_gid
do
    old_gid=${map%:*}
    new_gid=${map#*:}
    echo find /nas -gid $old_gid -exec chgrp --no-dereference $new_gid "'{}'" "';'" >> /home/joe/remap_ids.sh
done
echo
echo
echo
echo 'A script has been created at /home/joe/remap_ids.sh that will remap the IDs in the zpool.'
echo 'This is a DESTRUCTIVE operation, so I am taking full precautions not to run it automatically.'
echo 'Examine it before running it (which must be done as root). Good luck.'
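Each old:new pair in the generator loops above is split with POSIX parameter expansion rather than external tools. A quick sketch of how the two expansions behave:

```shell
# Splitting "old:new" pairs the way the remap_ids.sh generator loops do
# (the UID values here are just illustrative).
map="117:971"
old_uid=${map%:*}  # drop the shortest trailing ':*' match -> 117
new_uid=${map#*:}  # drop the shortest leading '*:' match  -> 971
echo "$old_uid -> $new_uid"  # -> 117 -> 971
```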

Plex

paru -S plex-media-server
sudo mkdir -p "/var/lib/plex/Library/Application Support/Plex Media Server"
sudo chown -R plex:plex /var/lib/plex
sudo ln -s /nas/media/junk/PlexCacheThing "/var/lib/plex/Library/Application Support/Plex Media Server/Cache"
sudo systemctl start plexmediaserver.service
echo 'You will need to manually configure the Plex server.'
echo 'Instructions for remote configuration are here: https://wiki.archlinux.org/title/Plex#Configuration'
read -e '_ignored?(read that! then press Enter to continue)'

Jellyfin

sudo ln -s /nas/media/jellyfin /var/lib/jellyfin # just duplicating what I have for now...
paru -S jellyfin-server jellyfin-web
sudo systemctl enable --now jellyfin.service

Services pre-configured on zpool

cat >/tmp/archipelago.service <<END
[Unit]
Description=Archipelago

[Service]
Type=simple
WorkingDirectory=/nas/archipelago/src
ExecStart=python WebHost.py
Restart=on-failure
User=archipelago

[Install]
WantedBy=multi-user.target
END
cat >/tmp/archipelago2.service <<END
[Unit]
Description=Archipelago 2

[Service]
Type=simple
WorkingDirectory=/nas/archipelago2/Archipelago
ExecStart=python WebHost.py
Restart=on-failure
User=archipelago2

[Install]
WantedBy=multi-user.target
END
cat >/tmp/foundryvtt.service <<END
[Unit]
Description=Foundry VTT

[Service]
Type=simple
ExecStart=node /nas/foundryvtt/app/resources/app/main.js --dataPath=/nas/foundryvtt/data
Restart=on-failure
User=foundryvtt

[Install]
WantedBy=multi-user.target
END
cat >/tmp/foundryvtt2.service <<END
[Unit]
Description=Foundry VTT (Second Instance)

[Service]
Type=simple
ExecStart=node /nas/foundryvtt2/app/resources/app/main.js --dataPath=/nas/foundryvtt2/data
Restart=on-failure
User=foundryvtt2

[Install]
WantedBy=multi-user.target
END

sudo mv /tmp/{foundryvtt{,2},archipelago{,2}}.service /etc/systemd/system/
read -s '_ignore?About to enable the custom services that will only run in "go mode". This is your chance to Ctrl-C out.'
echo
sudo systemctl enable --now {foundryvtt{,2},archipelago{,2}}.service

Notes from actually doing it

  • can't completely wipe /dev/sdc because /dev/sdc3 is part of the zpool

  • /dev/sdc1 is boot, /dev/sdc2 is root

  • booted into BIOS instead of UEFI. noticed quickly

  • saw the boot looping issue again. updated BIOS to 3.50, boot looping went away.

  • needed to change it from /dev/enp1s0 to /dev/enp2s0

  • YOLOing certbot into not-test-mode. worked fine. probably owing to all the testing that I did when it was in test mode.

  • need to handle smb users after all is done. done.

  • remap_ids.sh doesn't pass -h / --no-dereference, so it never changes permissions of links. did it manually this time.

  • need to add ACL entries so the archipelago user can create encrypted web socket connections. done.

  • need to add a certbot 'deploy' renewal hook to add the ACL entry for the archipelago user. done:

    • /etc/letsencrypt/renewal-hooks/deploy/01-archipelago-acl.sh

      #!/bin/sh
      setfacl -m "u:archipelago:r" "$(realpath $RENEWED_LINEAGE/privkey.pem)"
  • Arch made it easier for me to run archipelago in a venv than the way I was running it before. this is a very good thing.

  • NFS functionally means Kerberos... but I can use it maybe unfunctionally for a while.

  • added a pacman hook to take a snapshot of the "/" subvolume at PreTransaction time:

    • all based on https://github.com/vaminakov/btrfs-autosnap

    • /etc/pacman.d/hooks/01-snapshot-root.hook

      [Trigger]
      Type = Package
      Operation = Install
      Operation = Upgrade
      Operation = Remove
      Target = *
      
      [Action]
      Description = Making BTRFS snapshot of the root...
      Depends = btrfs-progs
      When = PreTransaction
      Exec = /usr/local/bin/snapshot-root.sh
      AbortOnFail
      NeedsTargets
    • /usr/local/bin/snapshot-root.sh

      #!/bin/sh
      
      # based on https://github.com/vaminakov/btrfs-autosnap
      btrfs subvolume snapshot -r / "/.snapshots/root-auto/$(date -u --rfc-3339=ns)"
  • removal hook for dracut is wrong because of when it runs... just deleting it for now. Also edited the inline script for /usr/local/bin/airbreather-runs-dracut-like-this.sh above, now that I've seen how it handles an actual update.
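A side note on the snapshot naming in that hook: RFC 3339 UTC timestamps sort lexicographically in chronological order, so a plain ls of /.snapshots/root-auto lists snapshots oldest-first. For instance:

```shell
# Two RFC 3339 timestamps one second apart compare correctly as strings.
# GNU date's -d is used here only to pin the example times.
a=$(date -u --rfc-3339=ns -d '2024-01-28 10:00:00 UTC')
b=$(date -u --rfc-3339=ns -d '2024-01-28 10:00:01 UTC')
[ "$a" \< "$b" ] && echo "lexical order == chronological order"
```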

Notes from new stuff

This file is intended to contain notes for things I put in here after the initial install.

krb5

Totally failed to get this up and running, even just for NFS.

I never got it working in the end, and I tried to clean it up, but I suspect there are things scattered around from my various attempts.

Forgejo

Relatively painless:

  • Installed the package

  • Started the service as-is, out-of-the-box

  • Configured it normally

  • Confirmed that it's working

  • Stopped the service

  • Moved everything to the zpool... I THINK something like:

    sudo zfs create nas/forgejo
    sudo chown forgejo:forgejo /nas/forgejo
    sudo chmod 0755 /var/lib/forgejo
    sudo mv /var/lib/forgejo/* /nas/forgejo/
    sudo rmdir /var/lib/forgejo
    sudo ln -s /nas/forgejo /var/lib/forgejo
    sudo chown -h forgejo:forgejo /var/lib/forgejo
  • /etc/systemd/system/forgejo.service.d/data-directory.conf

    [Service]
    ReadWriteDirectories=/nas/forgejo
  • daemon-reload and then start the service again

  • Added to my reverse proxy config, most important details being:

    location / {
        client_max_body_size 512M;
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

Docker

Initially for the Forgejo runner, but we all know how all the kool kidz like to containerize everything because they can't be bothered to deal with dependencies.

Creating group 'docker' with GID 962

Adding joe and forgejo to the docker group. Probably just need to keep forgejo in there for this.

Installing docker-compose and docker-buildx. Don't forget to docker buildx install.

read -s 'registrationToken?Forgejo runner registration token: '
echo
cat >/tmp/docker-compose.yml <<'EOF'
# Copyright 2023 The Forgejo Authors.
# SPDX-License-Identifier: MIT

version: "3"

services:
  runner-register:
    image: code.forgejo.org/forgejo/runner:3.3.0
    user: 0:0
    network_mode: host
    volumes:
      - /nas/forgejo/runner-data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    command:
      - /bin/sh
      - -c
      - |
        forgejo-runner register --no-interactive --token REGISTRATION_TOKEN_HERE --name runner --instance https://git.startcodon.com
        forgejo-runner generate-config > config.yml
        sed -i -e "s|network: .*|network: host|" config.yml
        sed -i -e "s|labels: \[\]|labels: \[\"docker:docker://alpine:3.19\"\]|" config.yml ;

  runner-daemon:
    image: code.forgejo.org/forgejo/runner:3.3.0
    depends_on:
      runner-register:
        condition: service_completed_successfully
    user: "FORGEJO_UID_HERE:DOCKER_GID_HERE"
    volumes:
      - /nas/forgejo/runner-data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    command: forgejo-runner --config config.yml daemon
EOF
sed -i s/REGISTRATION_TOKEN_HERE/$registrationToken/g /tmp/docker-compose.yml
sed -i s/FORGEJO_UID_HERE/$(id -u forgejo)/g /tmp/docker-compose.yml
sed -i s/DOCKER_GID_HERE/$(getent group docker | cut -d: -f3)/g /tmp/docker-compose.yml
sudo mv /tmp/docker-compose.yml /etc/forgejo/
sudo chown forgejo:forgejo /etc/forgejo/docker-compose.yml
sudo mkdir -p /nas/forgejo/runner-data
sudo chown -R forgejo:forgejo /nas/forgejo/runner-data
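One caveat on the sed splices above: they assume the registration token contains no sed-special characters. If the token could contain '/', a different delimiter avoids the expression being cut short (hypothetical value shown):

```shell
# Using '|' as the s/// delimiter so a '/' inside the substituted value
# can't terminate the sed expression early.
token='abc/def'  # hypothetical token containing a slash
echo 'token: REGISTRATION_TOKEN_HERE' \
    | sed "s|REGISTRATION_TOKEN_HERE|$token|g"  # -> token: abc/def
```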
cat >/tmp/forgejo-runner.service <<EOF
[Unit]
Description=Forgejo runner
Requires=docker.service
After=docker.service

[Service]
Type=simple
Restart=always
User=forgejo
Group=docker
WorkingDirectory=/etc/forgejo
ExecStartPre=docker compose -f docker-compose.yml stop
ExecStart=docker compose -f docker-compose.yml up
ExecStop=docker compose -f docker-compose.yml stop

[Install]
WantedBy=multi-user.target
EOF
sudo chown root:root /tmp/forgejo-runner.service
sudo mv /tmp/forgejo-runner.service /etc/systemd/system/
cat >/tmp/Dockerfile.forgejo-runner <<'EOF'
FROM node:lts-bookworm

RUN apt-get update \
  && apt-get install -y \
    zstd zip \
  && rm -rf /var/lib/apt/lists/*
EOF
sudo chown root:root /tmp/Dockerfile.forgejo-runner
sudo mv /tmp/Dockerfile.forgejo-runner /etc/forgejo/
sudo docker build /etc/forgejo --file /etc/forgejo/Dockerfile.forgejo-runner --tag airbreather/forgejo-runner
cat >/tmp/Dockerfile.dotnet-sdk-8.0-forgejo-runner <<'EOF'
FROM airbreather/forgejo-runner

RUN wget https://packages.microsoft.com/config/debian/12/packages-microsoft-prod.deb -O packages-microsoft-prod.deb \
  && dpkg -i packages-microsoft-prod.deb \
  && rm packages-microsoft-prod.deb \
  && apt-get update \
  && apt-get install -y dotnet-sdk-8.0 \
  && rm -rf /var/lib/apt/lists/* \
  # Trigger first run experience by running arbitrary cmd
  && dotnet help
EOF
sudo chown root:root /tmp/Dockerfile.dotnet-sdk-8.0-forgejo-runner
sudo mv /tmp/Dockerfile.dotnet-sdk-8.0-forgejo-runner /etc/forgejo/
sudo docker build /etc/forgejo --file /etc/forgejo/Dockerfile.dotnet-sdk-8.0-forgejo-runner --tag airbreather/dotnet-sdk-8.0-forgejo-runner

Stalwart

Via Docker, because I need to get better at this thing.

DO THIS ONCE

sudo zfs create nas/mail
sudo useradd -r -U -m -d /nas/mail/stalwart/home stalwart
sudo mkdir -p /nas/mail/stalwart/data
sudo chown -R stalwart:stalwart /nas/mail/stalwart

This must be done for every version update... here's what I did for 0.6.0:

sudo docker pull stalwartlabs/mail-server:v0.6.0
sudo docker run -d -ti \
    -p 8969:443 \
    -p 25:25 \
    -p 587:587 \
    -p 465:465 \
    -p 143:143 \
    -p 993:993 \
    -p 4190:4190 \
    -v /nas/mail/stalwart/data:/opt/stalwart-mail \
    -v /etc/letsencrypt/live/startcodon.com/fullchain.pem:/certs/fullchain.pem \
    -v /etc/letsencrypt/live/startcodon.com/privkey.pem:/certs/privkey.pem \
    --name stalwart-mail \
    stalwartlabs/mail-server:v0.6.0
sudo docker exec -it stalwart-mail /bin/sh /usr/local/bin/configure.sh

Accept the defaults for everything except the domain settings; configure the domain and DNS as appropriate (startcodon.com as the main domain, mail.startcodon.com as the server hostname). Then...

sudo rm /nas/mail/stalwart/data/etc/certs/mail.startcodon.com/{fullchain,privkey}.pem
sudo ln -s /certs/fullchain.pem /nas/mail/stalwart/data/etc/certs/mail.startcodon.com/fullchain.pem
sudo ln -s /certs/privkey.pem /nas/mail/stalwart/data/etc/certs/mail.startcodon.com/privkey.pem
sudo docker start stalwart-mail

For whatever reason, I had to stop the container and restart it before the certs would get used.

Mail was getting flagged as spam (of course), but I did notice that the DMARC copypasta included p=none... setting that to p=reject at least allowed Proton to accept the e-mails without fear, so that's enough for me.
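For reference, the resulting DMARC record looks something like this (the rua address is illustrative, not necessarily what I used):

```
_dmarc.startcodon.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:postmaster@startcodon.com"
```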

Updating:

  • docker pull the latest version

  • READ THE UPGRADE MANUAL, currently at https://github.com/stalwartlabs/mail-server/blob/main/UPGRADING.md (but you know how that do)

    • 0.6 to 0.7 is special: it looks like ONLY the web admin setup is documented (there's no configure script anymore, and the old configure script tries to download from a location that currently hits 404). wow.
  • stop the old container (docker container ls to get the ID)

  • remove the old container by that same ID

  • probably the same docker run command as above (with the version number replaced), but double-check the latest docs.

Matrix

matrix-synapse:

Creating group 'synapse' with GID 198.
Creating user 'synapse' (Matrix Synapse user) with UID 198 and GID 198

then run:

sudo zfs create nas/synapse
cd /nas/synapse
sudo chown -R synapse:synapse .
sudo -u synapse python -m synapse.app.homeserver --server-name matrix.startcodon.com --config-path /etc/synapse/homeserver.yaml --generate-config --report-stats=yes
cd
sudo chmod 0700 /nas/synapse

# just in case some guide(s) expect to see it at /var/lib/synapse...
sudo rm -rf /var/lib/synapse
sudo ln -s /nas/synapse /var/lib/synapse

add the reverse proxy to nginx config, forward port 8448, then you can start / enable synapse.service

2024-03-26: disabled synapse, it's using too much CPU idly and I'm not actually using it really.

Update 2024-03-16

Finally got around to updating the pacman package. makepkg.conf comes with several changes, which I've brought in mostly as-is, except:

  1. 90bf367e brought the system makepkg.conf in line with the version from devtools (permalink to the current latest version as I type this). This is mostly sane, with one major exception: --ultra -20 makes sense for what devtools is there to achieve, but on a user's system it's borderline ludicrous to spend so much CPU power to shrink the package files by so little relative to, say, -10.

    • Even -10 seems a little bit on the wild side for a GENERAL-PURPOSE BASELINE IN A MAINSTREAM LINUX DISTRIBUTION, but I've looked at enough benchmarks and tested out enough on my own hardware to come to the conclusion that -10 is a pretty decent tradeoff between speed and size, such that I feel comfortable going to at least that level wherever I care even a little.
    • And it's not completely unwarranted for even the baseline config to use -10, considering the context of what's going on around that flag: this is makepkg.conf, after all, so it's quite likely that this will never significantly impact users on hardware that would prefer a more conservative compression level. But --ultra -20 is ludicrous.
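Concretely, my override ends up in /etc/makepkg.conf's COMPRESSZST array; something like the following (a sketch of my setting, not the shipped default):

```shell
# /etc/makepkg.conf fragment: compress packages at zstd level 10 using all
# cores (-T0), instead of devtools' '--ultra -20'.
COMPRESSZST=(zstd -c -T0 -10 -)
```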

Plex

Removed, in favor of Jellyfin. Requested my account deleted, removed all devices, everything.

  • deleted the plex user and corresponding group

  • deleted /nas/media/junk/PlexCacheThing

  • removed everything with the old plex user's UID or its old group's GID using the following commands to see what those were (it was pretty much just /var/lib/plex and some temporary files that stopping the service didn't quite clean up perfectly, I think):

    sudo find / -path "/.snapshots/*" -prune -o -uid 970 -print >plex-uid
    sudo find / -path "/.snapshots/*" -prune -o -gid 970 -print >plex-gid