
@cgwalters
Last active April 25, 2022 13:04
colin's quick investigation of ubuntu core and persistence

Ubuntu Core investigation

Following up on this tweet.

Setup

I followed the qemu instructions, using ubuntu-core-20-amd64.img.
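For reference, the launch looks roughly like the sketch below. The exact flags, the OVMF firmware path, and the SSH port forward are assumptions on my part (they vary by distro and by the version of the instructions), not something taken from this session:

```shell
# Sketch of a qemu launch for the ubuntu-core-20-amd64.img image.
# OVMF path and hostfwd port are assumptions; adjust for your system.
qemu-system-x86_64 \
    -smp 2 -m 2048 \
    -drive file=/usr/share/OVMF/OVMF_CODE.fd,if=pflash,format=raw,readonly=on \
    -drive file=ubuntu-core-20-amd64.img,format=raw,if=virtio \
    -netdev user,id=net0,hostfwd=tcp::8022-:22 \
    -device virtio-net-pci,netdev=net0
```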

Investigating the mount/block setup

/ is a squashfs:

root@ubuntu:~# findmnt /
TARGET SOURCE     FSTYPE   OPTIONS
/      /dev/loop0 squashfs ro,relatime
root@ubuntu:~#

Ok, lsblk output seems pretty clear:

root@ubuntu:~# lsblk
... (skipping a pile of loopback devices)
vda    252:0    0   3.6G  0 disk 
├─vda1 252:1    0     1M  0 part 
├─vda2 252:2    0   1.2G  0 part /writable/system-data/var/lib/snapd/seed
├─vda3 252:3    0   750M  0 part /run/mnt/ubuntu-boot
├─vda4 252:4    0    16M  0 part /writable/system-data/var/lib/snapd/save
└─vda5 252:5    0   1.7G  0 part /run/mnt/base/writable

And then connecting these two via loop0:

root@ubuntu:~# losetup /dev/loop0
/dev/loop0: [64517]:415 (/run/mnt/data/system-data/var/lib/snapd/snaps/core20_1405.snap)
root@ubuntu:~#
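Another way to make the loop-to-file mapping visible, without depending on losetup's output format, is to read the kernel's sysfs directly. A minimal sketch (which loop devices exist, and what backs them, will of course differ per system):

```shell
# Print the backing file of every active loop device via sysfs.
# /sys/block/loopN/loop/backing_file only exists for bound devices.
for f in /sys/block/loop*/loop/backing_file; do
    [ -e "$f" ] || continue
    dev=${f#/sys/block/}        # -> loopN/loop/backing_file
    dev=/dev/${dev%%/*}         # -> /dev/loopN
    echo "$dev -> $(cat "$f")"
done
```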

It sure looks to me like the root filesystem is an unauthenticated squashfs. That said, this system seems to have Secure Boot disabled (I think I passed the right bits to qemu), so perhaps any verity usage is conditional on that?
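One way to probe the verity question is to look for an active device-mapper target of type "verity" in the storage stack. A hedged sketch (it assumes dmsetup from the device-mapper tools is installed, and only detects dm-verity, not other integrity schemes):

```shell
# `dmsetup table` prints one line per dm device:
#   <name>: <start> <length> <target> <args...>
# so field 4 is the target type; look for any "verity" entry.
if dmsetup table 2>/dev/null | awk '$4 == "verity" {found=1} END {exit !found}'; then
    echo "dm-verity target active"
else
    echo "no dm-verity targets found"
fi
```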

Scenario: container escape, attacker is CAP_SYS_ADMIN

Let's assume a malicious application exploits a flaw in the kernel (e.g. this one) and the attacker manages to gain real root AKA CAP_SYS_ADMIN.

It's easy to simulate this by just having a root shell. As far as I can tell, Ubuntu Core does not make any specific claims about this scenario.

Ubuntu Core is (at least by default) intentionally flexible; applications can be installed and removed (and there are even privileged applications), and there are writable, persistent data areas. Achieving "anti-persistence" in flexible/configurable systems is a hard problem. See an old blog post I have on this.

Anyways, I tried this out by going to the obviously named /writable mount point, which is where snaps are stored. Then I locally modified the data of the nano-strict application, and verified that my changes persisted across reboots.

root@ubuntu:~# cd /writable/system-data/snap/nano-strict/
root@ubuntu:~# cp -a 32 33  # While snap uses squashfs by default, it will happily use a regular directory here
root@ubuntu:~# echo -e '#!/bin/bash\necho persistent code' > 33/bin/nano
root@ubuntu:~# ln -sTfr 33 current  # Point snap at my modified code
root@ubuntu:~# (cd /var/lib/snapd/snaps && ln nano-strict_32.snap nano-strict_33.snap)  # trick snap into thinking there's another version
root@ubuntu:~# reboot
...
root@ubuntu:~# nano-strict
persistent code
root@ubuntu:~#