@flaki
Created February 7, 2023 21:36
Importing a Docker container image into Proxmox Linux Containers (LXC)

Importing the container

For this we are going to use lxc-create, which comes pre-installed on Proxmox but needs a few extra dependencies to fetch OCI images from Docker:

apt update && apt -y install skopeo umoci jq

After the installation it is possible to pull and import a Docker image; here we pull Alpine 3.16.0. Proxmox does not update dot-releases, so all of its built-in images are .0 releases. We are going to need the base image to debug some of the issues, so we start from there.

lxc-create dockerimage -t oci -- --url docker://alpine:3.16.0

Launching the created container

The command to launch the imported container is below. Note that this won't work out of the box; see below for the fixes to get it working:

lxc-start -f /var/lib/lxc/dockerimage/config -F --name=dockerimage --logfile ~/lxc.log --logpriority TRACE /bin/ash

-F runs the container in 'foreground' mode and attaches the terminal (if the shell does not start, the container will need to be killed from a second terminal; otherwise just type exit to leave).

We also set up logging into $HOME/lxc.log with the most verbose TRACE granularity.

Update networking for Proxmox

Proxmox uses a different name for its container network bridge, so the default imported config will fail to start:

lxc-start: dockerimage: ../src/lxc/network.c: netdev_configure_server_veth: 711 No such file or directory - Failed to attach "vethWmoVXa" to bridge "lxcbr0", bridge interface doesn't exist
lxc-start: dockerimage: ../src/lxc/network.c: lxc_create_network_priv: 3427 No such file or directory - Failed to create network device
lxc-start: dockerimage: ../src/lxc/start.c: lxc_spawn: 1843 Failed to create the network
lxc-start: dockerimage: ../src/lxc/start.c: __lxc_start: 2074 Failed to spawn container "dockerimage"
lxc-start: dockerimage: ../src/lxc/tools/lxc_start.c: main: 306 The container failed to start

By default, the imported image gets its network configuration through the lxcbr0 bridge interface. On Debian systems this bridge can be enabled through systemctl enable lxc-net, but on Proxmox the default configuration already includes the vmbr0 bridge, so we adjust the config in place like so (note the -i flag, without which sed only prints to stdout):

sed -i 's/lxcbr0/vmbr0/' /var/lib/lxc/dockerimage/config
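The substitution can be rehearsed on a scratch file first; a minimal sketch with throwaway paths (the real config lives at /var/lib/lxc/dockerimage/config):

```shell
# Rehearse the substitution on a scratch copy of the relevant config lines
cfg=$(mktemp)
cat > "$cfg" << 'EOF'
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
EOF

# -i makes sed edit the file in place rather than print to stdout
sed -i 's/lxcbr0/vmbr0/' "$cfg"

cat "$cfg"
```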

Fixing the error messages after launch

There are periodic repeating error messages on the console after starting the container:

can't run '/sbin/openrc': No such file or directory
can't run '/sbin/openrc': No such file or directory
can't run '/sbin/openrc': No such file or directory
can't open /dev/tty5: No such file or directory
can't open /dev/tty6: No such file or directory
...
/bin/ash: can't access tty; job control turned off

Removing the extra ttys

The tty5/tty6 errors are caused by extra tty entries in /etc/inittab, so we remove them (quoting the expression keeps the shell away from the $ anchor):

sed -i '/tty[56]$/d' /var/lib/lxc/dockerimage/rootfs/etc/inittab
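The deletion can be tried out against a sample inittab in a scratch file (the getty lines below are illustrative stand-ins for the real Alpine inittab):

```shell
# Rehearse the deletion against a sample inittab in a scratch file
inittab=$(mktemp)
cat > "$inittab" << 'EOF'
::sysinit:/sbin/openrc sysinit
tty1::respawn:/sbin/getty 38400 tty1
tty5::respawn:/sbin/getty 38400 tty5
tty6::respawn:/sbin/getty 38400 tty6
EOF

# Delete every line ending in tty5 or tty6, leaving the rest intact
sed -i '/tty[56]$/d' "$inittab"

cat "$inittab"
```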

Job control

The shell complains because we launch it instead of e.g. /sbin/init as process 1. This gives us a terminal, though, for further debugging.

Launching /sbin/init would also complain, because it can't find openrc. We'll need to fix that, but first we need working networking to be able to install packages.

Fix networking

Either inside or outside the container we will need to add the default configuration to /etc/network/interfaces. The command below will do this from the host:

cat << EOF > /var/lib/lxc/dockerimage/rootfs/etc/network/interfaces
auto eth0
iface eth0 inet dhcp
hostname \$(hostname)
EOF

Normally we would run /etc/init.d/networking start, but in this image init.d is empty, so we bring up the network manually in the shell. With -v, ifup prints the commands it runs, followed by the udhcpc lease exchange:

ifup -va
run-parts /etc/network/if-pre-up.d
ip link set eth0 up
udhcpc -b -R -p /var/run/udhcpc.eth0.pid -i eth0 -x hostname:$(hostname)
udhcpc: started, v1.35.0
udhcpc: broadcasting discover
udhcpc: broadcasting discover
udhcpc: broadcasting select for 10.1.1.249, server 10.1.1.1
udhcpc: lease of 10.1.1.249 obtained from 10.1.1.1, lease time 7200
run-parts /etc/network/if-up.d

If you are not using Proxmox but e.g. lxc-net for bridging, remember that bridge networking is not trivial. You may also need to start udhcpc manually before running ifup on the network interface.

Starting the container

Now we can start the container and use lxc-attach to execute commands inside and attach a console.

lxc-start -f /var/lib/lxc/dockerimage/config --name=dockerimage --logfile ~/lxc.log --logpriority DEBUG

Notice the missing -F: LXC will start the container in the background, and we can interact with it using lxc-attach --name dockerimage -- whoami to run single commands, or simply lxc-attach --name dockerimage to get a root console (type exit or press Ctrl+A, Q to leave).

Getting OpenRC working

First of all we need to ensure the necessary packages are installed. We run the following commands on the Proxmox host again; below we add all packages that might be missing (going by the default Proxmox templates):

WORLD=/var/lib/lxc/dockerimage/rootfs/etc/apk/world
for pkg in alpine-base alpine-baselayout alpine-keys apk-tools busybox doas libc-utils logrotate; do
	grep -q "^${pkg}$" "$WORLD" || echo $pkg >> "$WORLD"
done
LC_ALL=C sort "$WORLD" -o "$WORLD"
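The append-if-missing loop can be sanity-checked on a scratch world file; a self-contained sketch (the package list and temp path are stand-ins):

```shell
# Stand-in for /var/lib/lxc/dockerimage/rootfs/etc/apk/world
WORLD=$(mktemp)
printf 'busybox\nalpine-base\n' > "$WORLD"

# Append each package only if not already listed, then sort in place
for pkg in alpine-base alpine-baselayout apk-tools busybox; do
	grep -q "^${pkg}$" "$WORLD" || echo "$pkg" >> "$WORLD"
done
LC_ALL=C sort "$WORLD" -o "$WORLD"

cat "$WORLD"
```

Running the loop twice is harmless: the grep guard keeps every package listed exactly once.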

Note: if you want to install any other packages (e.g. openssh, nano, etc.) you can add them here.

This only lists the packages we want; to actually install them we first need networking to download them.

Since we still don't have init.d, we have to manually enable the network interfaces we configured earlier, install the required packages, and then use rc-update to add some initial processes to their respective runlevels so that they autostart. We do this by piping all of these commands into a root shell via lxc-attach:

cat << EOF | lxc-attach --name=dockerimage
ifdown -a
ifup -a && apk add --root=/ --initdb \$(cat /etc/apk/world)

rc-update --quiet add bootmisc boot
rc-update --quiet add hostname boot
rc-update --quiet add savecache shutdown
rc-update --quiet add killprocs shutdown
rc-update --quiet add syslog boot
rc-update --quiet add networking boot default
rc-update --quiet add crond default

reboot
EOF

After the reboot all services should be functioning, and we can check this from the console (rc-update to verify services, ip a to check networking, tail /var/log/messages for syslog):

lxc-attach --name=dockerimage

Packing up the rootfs as a Proxmox template

To make things easier we store the rootfs path as:

ROOTFS=/var/lib/lxc/dockerimage/rootfs

Update templated files

Proxmox changes some files to apply the configuration specified from the UI/CLI, so we make sure that the anchors it's looking for are where they are supposed to be.

First of all we stop the container:

lxc-stop --name=dockerimage

Then we update the templated files and clean up leftover state:

# rewrite the /etc/hosts file
cat << EOF > $ROOTFS/etc/hosts
127.0.1.1	LXC_NAME
127.0.0.1	localhost localhost.localdomain
::1		localhost localhost.localdomain
EOF

# update /etc/hostname
echo "LXC_NAME" > $ROOTFS/etc/hostname

# clean out the root shell history, logfile, etc.
echo -n > $ROOTFS/root/.ash_history

rm $ROOTFS/var/log/messages*
echo -n > $ROOTFS/var/log/messages

# clean out /run
cd $ROOTFS/run/
rm -rf $(ls -A)

There might be others I missed, but these are the most important ones.

Create the compressed container template

Templates are just (usually compressed) tar archives of the rootfs. We use tar to compress our rootfs and place it into the Proxmox template directory (on the default storage this is /var/lib/vz/template/cache):

TEMPLATEDIR=/var/lib/vz/template/cache
tar -cJf $TEMPLATEDIR/alpine-3.16-docker_20220724_amd64.tar.xz -p --sparse -C $ROOTFS $(ls -A $ROOTFS)
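One detail worth rehearsing: the rootfs contents must sit at the top level of the archive, not nested under a rootfs/ directory, which is why the command uses -C with $(ls -A). A throwaway sketch with a miniature stand-in rootfs:

```shell
# Build a miniature stand-in rootfs and pack it the same way
ROOTFS=$(mktemp -d)
mkdir -p "$ROOTFS/etc" "$ROOTFS/bin"
echo "LXC_NAME" > "$ROOTFS/etc/hostname"

TEMPLATE=$(mktemp -u).tar.xz
tar -cJf "$TEMPLATE" -p --sparse -C "$ROOTFS" $(ls -A "$ROOTFS")

# Entries sit at the top level (bin/, etc/), with no wrapping directory
tar -tJf "$TEMPLATE"
```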

With the template created we no longer need the lxc image:

lxc-destroy --name=dockerimage

Now the GUI or pct create can be used to create a new image from the template!
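For example, from the CLI (a sketch only; the VMID 200, hostname, and local-lvm storage name are placeholders for your own setup):

```shell
pct create 200 local:vztmpl/alpine-3.16-docker_20220724_amd64.tar.xz \
  --hostname docker-alpine \
  --storage local-lvm \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1
pct start 200
```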

