Boondocks OS Yocto Build in an LXC Container

boondocks-raspberrypi

The goal is to use an LXC container with Docker to automate the build process for boondocks-os.

Setup the Build Environment

Management Tasks

Research

Primary Technology Used

  • Ubuntu 16.04.4 Host OS
  • Ubuntu 16.04 Container OS
  • LXD latest version installed from snap
  • ZFS storage pool as raw block device
  • Default lxc managed network bridge (environment-specific)
  • Yocto

Appendix

Example configurations:

Assumptions & Rules

General

  • This build process currently requires Ubuntu 16.04.x running as the container OS.
  • Ensure you have access to the VM's console during network configuration changes. (ESXi, VMware Workstation, etc.)
  • Run apt update and apt upgrade on the Host OS prior to installing and/or upgrading snap and lxd.
  • ZFS tools should be installed on the host: sudo apt install zfsutils-linux
  • Advanced LXD network configuration topics are beyond the scope of this build guide. Anything beyond a default network bridge will require additional research on the reader's part for the specific environment.
  • Kernel upgrade instructions are beyond the scope of this build guide. Please Google it.

Host VM Kernel

Before proceeding any further, determine if the Host VM requires a kernel version upgrade. This guide was built using kernel version: Linux 4.15.0-22-generic x86_64

To determine the current kernel version:

uname -msr
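
Sample output on the reference system:

Linux 4.15.0-22-generic x86_64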

Once the kernel version is upgraded properly, proceed.

Configure Host VM

Host VM Limits Changes

The Host VM requires changes to the default inotify and open-file limits. Make the changes listed in the Appendix sections "Host VM /etc/security/limits.conf Additions" and "Host VM /etc/sysctl.conf Additions".

Reboot the Host VM and then continue.

Required Packages

The following packages are required on the host:

  • snap
  • LXD
  • zfsutils-linux
Installing LXD from snap

The version of LXD running on the host should be upgraded to the latest prior to sudo lxd init.

sudo snap install lxd
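
If the LXD snap is already installed, bring it to the latest version before running sudo lxd init:

sudo snap refresh lxd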

Reference: Installing the LXD snap

Reference: LXD is now available in the Ubuntu Snap Store

Host VM running LXD

Minimum recommended Host VM resource allocations:

Resource     Configuration
CPU Cores    6 to 12. 6 is the recommended minimum; the build process is very CPU intensive.
Memory       32GB. The build process is very memory intensive; allocating at least 32GB during active builds is highly advised.

Init LXD with a ZFS Pool and Network Bridge

sudo lxd init
  • Configure a ZFS storage pool named lxd-pool using defaults; raw block device for this example is: /dev/sdb
  • Configure default LXD network bridge named lxd-bridge using appropriate IPv4 and IPv6 settings for the environment.
  • Verify external and/or internal host network connectivity.
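
For reference, a sample lxd init dialog is shown below. Prompts vary by LXD version, and the answers here are only an illustration using the pool, device, and bridge names from this guide; adjust for your environment.

sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: lxd-pool
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: zfs
Create a new ZFS pool? (yes/no) [default=yes]: yes
Would you like to use an existing block device? (yes/no) [default=no]: yes
Path to the existing block device: /dev/sdb
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]: lxd-bridge
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: auto
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like LXD to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no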

Verify Default Profile & Copy

lxc profile show default

You should see that LXD is:

  • Using lxd-bridge for bridged networking
  • With an eth0 network interface
  • On the ZFS storage pool called lxd-pool
config: {}
description: ""
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxd-bridge
    type: nic
  root:
    path: /
    pool: lxd-pool
    type: disk
name: default
used_by: []

Now copy the default profile to a new profile called: boondocks-os. This copied profile will be assigned to the boondocks-os container when it is created.

lxc profile copy default boondocks-os
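
Verify the new profile matches the default:

lxc profile show boondocks-os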

Verify ZFS Storage Pool

You should see the created ZFS pool called lxd-pool on the block device: /dev/sdb.

sudo zpool status
  pool: lxd-pool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	lxd-pool    ONLINE       0     0     0
	  sdb       ONLINE       0     0     0

errors: No known data errors

Create boondocks-os container

Below are steps to create, modify, and start the container and show its resulting configuration.

Create container assigned the previously created boondocks-os profile:

lxc init ubuntu:16.04 boondocks-os --profile boondocks-os

The container used to build boondocks-os needs a few security settings changed.

Modify Container Configuration:

Set container to be privileged and load required kernel modules.

lxc config set boondocks-os security.nesting true
lxc config set boondocks-os security.privileged true
lxc config set boondocks-os linux.kernel_modules ip_tables
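
Individual settings can be verified with lxc config get, for example:

lxc config get boondocks-os security.privileged

Expected output: true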

If the environment requires a specific MAC address for the container, set it as follows:

lxc config set boondocks-os volatile.eth0.hwaddr 00:16:3e:55:bf:68
raw.lxc

Docker requires additional lxc configuration changes to support the build. Edit the container configuration and add the following raw.lxc keys as a child section of config. The syntax is YAML.

To add new raw.lxc keys:

lxc config edit boondocks-os
raw.lxc: |-
  lxc.apparmor.profile=unconfined
  lxc.cgroup.devices.allow=a
  lxc.cap.drop=

See below for a sample container configuration showing the raw.lxc keys added.

Docker Disk Device

By default, Docker will start up using the vfs storage driver when running on a ZFS storage pool. This does not provide a compatible backing filesystem to support the build. Adding an lxc disk device to the container allows Docker to use the much-preferred overlay2 storage driver.

Add a new disk device to the boondocks-os container, supplying a valid source path. The path will be specific to the environment / Host OS.

mkdir -p /lxc/boondocks-os/docker/
lxc config device add boondocks-os docker disk source=/lxc/boondocks-os/docker/ path=/var/lib/docker
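
Verify the device was added:

lxc config device show boondocks-os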

Start Container

lxc start boondocks-os

View Container Configuration:

This is a sample of the resulting container configuration.

lxc config show boondocks-os
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 16.04 LTS amd64 (release) (20180522)
  image.label: release
  image.os: ubuntu
  image.release: xenial
  image.serial: "20180522"
  image.version: "16.04"
  linux.kernel_modules: ip_tables
  raw.lxc: |-
    lxc.apparmor.profile=unconfined
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: 08bbf441bb737097586e9f313b239cecbba96222e58457881b3718c45c17e074
  volatile.eth0.hwaddr: 00:16:3e:55:bf:68
  volatile.idmap.base: "0"
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices:
  docker:
    path: /var/lib/docker
    source: /lxc/boondocks-os/docker/
    type: disk
ephemeral: false
profiles:
- boondocks-os
stateful: false
description: Boondocks OS Build Container

Setup boondocks-os container

Exec into the container to gain a bash shell.

lxc exec boondocks-os bash

You should now have a bash shell into the container:

root@boondocks-os:~#

Verify Network Connectivity

Verify external and/or internal host network connectivity.

Update Container OS

apt update && apt upgrade -y

Install Container Build Prerequisites

There are numerous prerequisites required to build Boondocks OS using Yocto. (Some of these may already be installed.) This list of prerequisites is an aggregate of everything needed, rather than installing packages one-off at varying steps in the process.

apt install -y apt-transport-https build-essential ca-certificates chrpath cpio curl debianutils diffstat gawk git iputils-ping iptables jq make python python3 python3-pexpect python3-pip socat software-properties-common texinfo xz-utils zip
Install Node.js + npm

Follow the general install guidelines for Ubuntu.

curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash - && apt install -y nodejs
Verify Node.js + npm Install
nodejs --version && npm --version
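
Sample output placeholders (the first line is the Node.js version, the second the npm version; exact values vary with the current 8.x release):

v8.x.x
x.x.x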
Install Docker

We'll be using docker-ce, not docker.io. The official docker-ce package now supports the LXC extensions that are required to properly run Docker containers inside an LXC container.

Follow the general install guidelines for Ubuntu 16.04.

https://docs.docker.com/install/linux/docker-ce/ubuntu/

The actual steps used are also listed below.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt update && apt install -y docker-ce
Verify Docker Install
docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.03.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.15.0-22-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 31.41GiB
Name: boondocks-os-builder
ID: 6NSR:4MD7:VS3D:O46V:5SRK:3SUL:DG2C:5R3O:VUUP:V6X7:LCPE:S2PJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
docker version
Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:17:20 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:15:30 2018
  OS/Arch:      linux/amd64
  Experimental: false
Verify Docker Runtime

Run the example hello-world.

docker run hello-world

Look at the image that was downloaded for hello-world.

docker image list

Clean up the hello-world example.

docker system prune --force && \
docker image rm hello-world && \
docker image list

Create builder User

This user runs manual builds from the shell and is used for custom CI platform integrations.

builder should be a member of the following groups: sudo, docker.

adduser builder && \
usermod -aG sudo builder && \
usermod -aG docker builder
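
Verify group membership:

id builder

The output should list both sudo and docker among the user's groups.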

Switch to builder User

su - builder

Set PATH for builder User

The $PATH needs to be updated by adding /sbin to gain access to /sbin/iptables.

nano .profile

Add /sbin to the beginning of the path as follows:

PATH="/sbin:$HOME/bin:$HOME/.local/bin:$PATH"

Wrap Up

At this point, log out of the boondocks-os container so a snapshot can be taken. This snapshot can be restored or used to create another container instance.

Snapshot the boondocks-os container.

Next, a manual source build will be done as the builder user inside the container to fix any remaining host dependencies, etc.

Manual Source Build

A manual test of the build process will be performed to ensure all system dependencies, libraries, configurations, etc. are in place. This will also ensure enough hardware resources have been allocated to the VM.

Run a manual source build.

Delete Existing LXD Storage Pool

When the LXD storage pool needs to be recreated for whatever reason, the following is a guide to deleting the existing storage pool and its associated volumes. This is a pre-step before wiping the block device used for the storage pool.

The lxd-pool will be deleted, along with its associated volumes and profiles.

All containers must be deleted prior to running these commands.

Identify the Storage Pool, Volumes, and Profiles

lxc storage list
+----------+-------------+--------+----------+---------+
|   NAME   | DESCRIPTION | DRIVER |  SOURCE  | USED BY |
+----------+-------------+--------+----------+---------+
| lxd-pool |             | zfs    | lxd-pool | 2       |
+----------+-------------+--------+----------+---------+
lxc storage volume list lxd-pool
+-------+------------------------------------------------------------------+-------------+---------+
| TYPE  |                               NAME                               | DESCRIPTION | USED BY |
+-------+------------------------------------------------------------------+-------------+---------+
| image | 08bbf441bb737097586e9f313b239cecbba96222e58457881b3718c45c17e074 |             | 1       |
+-------+------------------------------------------------------------------+-------------+---------+
lxc profile list
+---------+---------+
|  NAME   | USED BY |
+---------+---------+
| default | 0       |
+---------+---------+

Delete Associated Volumes

lxc storage volume delete lxd-pool image/08bbf441bb737097586e9f313b239cecbba96222e58457881b3718c45c17e074

Sample output:

Storage volume image/08bbf441bb737097586e9f313b239cecbba96222e58457881b3718c45c17e074 deleted

Delete Associated Profiles

lxc profile delete default

Sample output:

Profile default deleted

Delete the Storage Pool

lxc storage delete lxd-pool

Sample output:

Storage pool lxd-pool deleted

Run a Manual Source Build

Gain shell access to the boondocks-os container as the builder user.

Paths may vary. Adjust as needed.

From the Host OS:

lxc exec boondocks-os bash
su - builder

If the repo has not yet been manually cloned, clone it.

Clone Source Repo

mkdir -p ~/repos && cd ~/repos && \
git clone --recursive https://github.com/Boondocks/boondocks-raspberrypi.git && \
cd boondocks-raspberrypi

You should now be sitting inside the repo directory.

If not:

cd ~/repos/boondocks-raspberrypi

Run Development Build

./build.sh

Run Production Build

./build-prod.sh

Snapshot boondocks-os Container

The boondocks-os container should have snapshots taken from a known good state.

Benefits:

  • Recovery
  • Testing
  • Copy/Cloning containers

The container will be stopped, snapshotted, and then started again.

From the Host OS:

lxc stop boondocks-os && \
lxc snapshot boondocks-os snapshot-name-goes-here && \
lxc info boondocks-os && \
lxc start boondocks-os
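
Later, a snapshot can be restored in place, or used to create a new container (the snapshot and container names below are placeholders):

lxc restore boondocks-os snapshot-name-goes-here
lxc copy boondocks-os/snapshot-name-goes-here new-container-name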

Test Docker-in-Docker Writes Inside LXC Container

Below are partial code snippets from the full Yocto build, used to spin up a test dind (Docker-in-Docker) container.

In addition, the deprecated docker daemon invocation has been replaced by a call to dockerd, which eliminates a build warning.

Create Test Container

To create a test container, follow the same steps used to create the official boondocks-os container. The only change is the container's name; for this test, the container is called dind-test.

Exec into dind-test

With the test container created, exec into it:

lxc exec dind-test bash

In the root user's home directory, create the Dockerfile and entry.sh script shown below.

In addition, create a directory called docker.

-rw-r--r--  1 root root  150 Jun  8 16:25 Dockerfile
drwxr-xr-x  2 root root    2 Jun  8 16:45 docker/
-rw-r--r--  1 root root  748 Jun  8 17:27 entry.sh

Test Dockerfile

FROM docker:17.03-dind
RUN apk add --update util-linux shadow && rm -rf /var/cache/apk/*
ADD entry.sh /entry.sh
RUN chmod a+x /entry.sh
CMD /entry.sh

Test entry.sh

#!/bin/sh

set -o errexit
set -o nounset

finish() {
    # Make all files owned by the build system
    chown -R "root:root" /root/docker
}

trap finish EXIT

# Start docker with the created image
echo "Starting docker daemon with [vfs] storage driver."
dockerd -s "vfs" -g /root/docker &

echo "Waiting for docker to become ready.."

STARTTIME="$(date +%s)"
ENDTIME="$STARTTIME"

while [ ! -S /var/run/docker.sock ]
do
    if [ $((ENDTIME - STARTTIME)) -le 5 ]; then
        sleep 1 && ENDTIME=$((ENDTIME + 1))
    else
        echo "Timeout while waiting for docker to come up."
        exit 1
    fi
done

echo "Docker started. Pulling Boondocks Agent image..."
docker info
docker pull boondocks/boondocks-agent-raspberrypi3:boondocks-agent-raspberrypi3-v1.3.45

Build, Run Dockerfile

docker build -t dind-test -f Dockerfile .
docker run --privileged --rm --name dind-test dind-test:latest

The output will be docker info and the Boondocks Agent image download progress.

Wipe Block Device for New Storage Pool Creation

If an LXD storage pool has already been created and the goal is to completely recreate the storage pool for whatever reason, the block device will need to be "wiped" or "reset", if you will. This will prevent storage pool creation failure during sudo lxd init.

This example uses /dev/sdb

Delete LXD Storage Pool

There are a small number of cleanup steps prior to using gdisk.

Delete Existing LXD Storage Pool

Install gdisk

sudo apt install -y gdisk

Reset Block Device

sudo gdisk /dev/sdb

From the gdisk shell...

Choose the following options as you walk through the menu selections:

x
z
y
y

That means...

  • Enter Expert mode
  • 'Zap' the block device
  • Yes to "delete structures"
  • Yes to "blank out mbr"

Wipe Remaining Partition Information

There may be remaining partition information left on the disk that will prevent sudo lxd init from properly creating a new storage pool on the block device. Let's get rid of that as well.

sudo wipefs --all /dev/sdb

Sample output:

/dev/sdb: 8 bytes were erased at offset 0x00010040 (btrfs): 5f 42 48 52 66 53 5f 4d

Verify Block Device

There should be no partition information, no GPT labels, and no MBR.

sudo fdisk -l /dev/sdb

Running sudo wipefs /dev/sdb should now return no output.

Do not enter the fdisk or gdisk shell for the block device, because undesirable disk information could be written that could prevent the creation of a new storage pool.

Host VM /etc/security/limits.conf Additions

Add the following entries to /etc/security/limits.conf:

*               soft    nofile          1048576
*               hard    nofile          1048576
root            soft    nofile          1048576
root            hard    nofile          1048576
*               soft    memlock         unlimited
*               hard    memlock         unlimited

Based on the official LXD documentation.

Host VM /etc/sysctl.conf Additions

Add the following entries to /etc/sysctl.conf:

fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144

Based on the official LXD documentation.
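
The sysctl settings can be applied without a full reboot (the limits.conf changes above still require a new login session):

sudo sysctl -p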

Boondocks OS Container Configuration

Container configuration should be as follows:

lxc config show boondocks-os --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 16.04 LTS amd64 (release) (20180522)
  image.label: release
  image.os: ubuntu
  image.release: xenial
  image.serial: "20180522"
  image.version: "16.04"
  linux.kernel_modules: ip_tables
  raw.lxc: |-
    lxc.apparmor.profile=unconfined
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: 08bbf441bb737097586e9f313b239cecbba96222e58457881b3718c45c17e074
  volatile.eth0.hwaddr: 00:16:3e:55:bf:68
  volatile.idmap.base: "0"
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices:
  docker:
    path: /var/lib/docker
    source: /lxc/boondocks-os/docker/
    type: disk
  eth0:
    name: eth0
    nictype: bridged
    parent: lxd-bridge
    type: nic
  root:
    path: /
    pool: lxd-pool
    type: disk
ephemeral: false
profiles:
- boondocks-os
stateful: false
description: Boondocks OS Build Container

Boondocks OS ZFS Storage Pool Configuration

Host VM storage pool configuration should be as follows:

lxc storage info lxd-pool
info:
  description: ""
  driver: zfs
  name: lxd-pool
  space used: 20.87GB
  total space: 123.48GB
used by:
  containers:
  - boondocks-os
  - boondocks-os
  - boondocks-os
  - boondocks-os
  - boondocks-os
  - boondocks-os
  images:
  - 08bbf441bb737097586e9f313b239cecbba96222e58457881b3718c45c17e074
  profiles:
  - boondocks-os
  - default
sudo zpool status
  pool: lxd-pool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        lxd-pool    ONLINE       0     0     0
          sdb       ONLINE       0     0     0

errors: No known data errors

mkfs.ext4 Failure inside LXC Container

During the Yocto build, the following code:

fakeroot do_create_resin_data_partition() {
    # Create the ext4 partition out of ${B}/resin-data
    dd if=/dev/zero of=${B}/resin-data.img bs=1M count=0 seek=${PARTITION_SIZE}
    chown -R root:root ${B}/resin-data
    mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 -i 8192 -d ${B}/resin-data -F ${B}/resin-data.img
}

...is causing the following error:

mke2fs 1.43.5 (04-Aug-2017)
Discarding device blocks: done
Creating filesystem with 262144 4k blocks and 131072 inodes
Filesystem UUID: 6f033c3e-dc48-447e-bd6e-56953b88eae4
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Copying files into the device: __populate_fs: Could not allocate block in ext2 filesystem while writing file "libdb-5.3.so"
mkfs.ext4: Could not allocate block in ext2 filesystem while populating file system
WARNING: exit code 1 from a shell command.

Full source code is here.
