@packerdl
Last active May 11, 2024 06:55
Intel QuickSync passthrough to an unprivileged LXC container running Plex.

Running Plex in an Unprivileged LXC with Intel QuickSync Passthrough

First, set up an unprivileged Ubuntu container with Plex Media Server installed. Inside the container, take note of the id of the plex group.

# Your Plex group's ID may be different
$ getent group plex | cut -d : -f3
998

Shut down the container for now while we update its configuration. The /dev/dri/renderD128 device provides the Intel QuickSync VAAPI interface used for hardware video encoding. Listing the /dev/dri directory on the host, you will see something like this:

$ ls -la /dev/dri
drwxr-xr-x  3 root root        100 Jul 10 19:23 .
drwxr-xr-x 22 root root       4.4K Jul 16 23:57 ..
drwxr-xr-x  2 root root         80 Jul 10 19:23 by-path
crw-rw----  1 root video  226,   0 Jul 10 19:23 card0
crw-rw----  1 root render 226, 128 Jul 10 19:23 renderD128

Note that renderD128 is a character device, denoted by the 'c' at the beginning of its permission attributes. The device's major and minor numbers are 226 and 128 respectively, and it is accessible by the render group. Take note of the render group's id as well.

# Again your render group ID may be different
$ getent group render | cut -d : -f3
108
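
If you want to double-check the device type and major/minor numbers without reading the ls output, stat can print them directly (a quick sanity check only, nothing below is required for the rest of the guide; stat reports the numbers in hex):

# 0xe2 = 226 and 0x80 = 128, matching the ls listing above
$ stat -c 'type=%F major:minor=%t:%T' /dev/dri/renderD128
type=character special file major:minor=e2:80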

Open the container's configuration file. Within a Proxmox installation it will be located in /etc/pve/lxc and have a name corresponding to the container's assigned ID.
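
For example, if the container's assigned ID were 101 (a placeholder; substitute your own container's ID), you would edit:

# 101 is a hypothetical container ID
$ nano /etc/pve/lxc/101.conf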

Append the following to the configuration:

# Allow the container access to the renderD128 device identified by its type and major/minor numbers.
# The attributes 'rwm' allow the container to perform read, write and mknod operations on the device.
#
# For Proxmox 6.x (LXC 3.x):
lxc.cgroup.devices.allow: c 226:128 rwm
#
# For Proxmox 7.x (LXC 4.x uses CGroupV2):
lxc.cgroup2.devices.allow: c 226:128 rwm

# Bind mount the device from the host to the container
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0

At this point, if the container were running, the device would be visible in the container but belong to the user and group nobody and nogroup respectively. The Plex user would not be able to write to it.

LXC helps isolate containers by mapping a set of user and group ids on the host to a set of user and group ids in the container. Normally a range of ids outside the POSIX range of 0-65535 is used on the host to back the container's users and groups. By default, Proxmox configures LXC to map host users and groups 100000-165535 to container users and groups 0-65535. In the event of a container escape exploit, the malicious user from the container would not have permissions to modify the host filesystem.
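
You can see the active mapping from inside a running unprivileged container by reading /proc (a quick illustration; the values reflect whatever idmap is configured, here Proxmox's default):

# Inside the container: column 1 = start of container id range,
# column 2 = start of host id range, column 3 = length of the range
$ cat /proc/self/uid_map
         0     100000      65536
$ cat /proc/self/gid_map
         0     100000      65536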

So now it's clear that the render group (108) on the host does not fall within the range mapped to the container. This is why the device shows up as nobody/nogroup. We can update the container configuration further with a custom id mapping that maps the plex group in the container (998) to the render group on the host (108).

# /etc/pve/lxc/CONTAINER_ID.conf
# In older versions of LXC, the configuration was lxc.id_map
# Syntax:
# Column 1: u/g define map for user or group ids
# Column 2: Range start for container
# Column 3: Range start for host
# Column 4: Length of range
# i.e., g 0 100000 998 = Map gids 100000-100997 on host to 0-997 in container
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 998
lxc.idmap: g 998 108 1
lxc.idmap: g 999 100999 64537

The key is to define a jump in the mapping so that exactly one container group overlaps with the host's normal id range (plex in the container <-> render on the host).
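
Spelled out, the three group lines must cover all 65536 container gids exactly once. With the example ids above, the ranges work out as follows (a sanity check of the arithmetic, not extra configuration):

# Container gids   Host gids         Count
# 0 - 997          100000 - 100997   998
# 998              108               1       (plex in container -> render on host)
# 999 - 65535      100999 - 165535   64537
# Total: 998 + 1 + 64537 = 65536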

Finally, we have to update the /etc/subgid file to allow us to apply this custom mapping. The subuid/subgid files define which id mappings a host user can make while using LXC. The first column is the host user, the second column is the start of the host id range to be mapped, and the third column is the number of ids that can be allocated.

# /etc/subgid
root:100000:65536
root:108:1  # Add this line
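
After saving both files, start the container again. Under the example mapping above, the device should now show up inside the container with plex as its group (expected output; your timestamp will differ, and the owning user stays nobody because host uid 0 is not mapped into the container):

$ ls -l /dev/dri/renderD128
crw-rw---- 1 nobody plex 226, 128 Jul 10 19:23 /dev/dri/renderD128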

A Note on Security Implications

Files on the host filesystem that are accessible by the render group could now be modified in the event of a container escape vulnerability. Few files belong to that group and this configuration is much more constrained than a privileged container, so I am comfortable with this level of risk. Make sure you understand the potential consequences before mapping container uid/gids into the commonly-used range of host ids.

@bmaximenko commented Mar 17, 2023

lxc.idmap: g 108 100108 64537

This should be "lxc.idmap: g 108 100108 65428", the rest looks good to me.

@lordhippo93 commented Mar 17, 2023

big thank you!

65536-108=65428 was the piece I was missing :)

@thespinmaster

I accomplished this in a slightly different way which I find more elegant.

I got the first method working, mapping a specific user, but I could not get this method to work.
My render mappings between the host and the container are the same at 103

lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 102
lxc.idmap: g 103 103 1
lxc.idmap: g 104 100104 65432

~# id plex
uid=1000(plex) gid=1000(plex) groups=1000(plex),103(render)

~# ls /dev/dri -l
total 0
crw-rw---- 1 nobody render 226, 128 Mar 30 09:41 renderD128

Testing using tdarr FFmpeg VAAPI from here, I eventually get:
[AVHWDeviceContext @ 0x55d4e5fa0180] No VA display found for device /dev/dri/renderD128. Device creation failed: -22.
[h264 @ 0x55d4e5f9e180] No device available for decoder: device type vaapi needed for codec h264.

Am I missing some other magic sauce?
TIA

@bmaximenko commented Mar 31, 2023

Am I missing some other magic sauce?

Do you run tdarr as a user that belongs to the render group (e.g. plex)?

@thespinmaster

Thanks, @bmaximenko. That did the trick. I was changing the container's pid/uid's and not using the --user option. All working now. vainfo is also working, which it did not using the initial method.

@Aljutor commented Apr 3, 2023

Trying to pass through an Intel Arc A380 into an LXC with Arch Linux, from Proxmox 7.4 with a 6.2 kernel.

989 - render group in the container

lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 226:1 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0
lxc.mount.entry: /dev/dri/card1 dev/dri/card0 none bind,optional,create=file 0 0 #this just for testing
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 989
lxc.idmap: g 989 103 1
lxc.idmap: g 990 100990 64546

In the container:

[root@jellyfin ~]# ls -l /dev/dri/
total 0
crw-rw---- 1 nobody nobody 226,   1 Apr  3 10:38 card0
crw-rw---- 1 nobody render 226, 128 Apr  3 10:38 renderD128

/etc/subgid

root@daeron:/etc/pve/lxc# cat /etc/subgid
root:100000:65536
root:103:1

But I am still getting errors:

[root@jellyfin ~]# vainfo
Trying display: wayland
error: XDG_RUNTIME_DIR is invalid or not set in the environment.
Trying display: x11
error: can't connect to X server!
Trying display: drm
error: failed to initialize display

and in ffmpeg

[AVHWDeviceContext @ 0x556051a99680] No VA display found for device /dev/dri/renderD128.

removing this line does not help, btw

lxc.mount.entry: /dev/dri/card1 dev/dri/card0 none bind,optional,create=file 0 0 #this just for testing

If I create a separate user and add it to the render group (in the container), vainfo works. If I add root to the render group, it doesn't help.

@aortmannm commented May 26, 2023

Mh, I don't get it to work. Can't start my LXC container ;/ Perhaps you can help me figure out what's wrong.

Error:

lxc_map_ids: 3701 newgidmap failed to write mapping "newgidmap: gid range [33-34) -> [103-104) not allowed": newgidmap 9310 0 100000 33 33 103 1 34 100034 65502
lxc_spawn: 1788 Failed to set up id mapping.
__lxc_start: 2107 Failed to spawn container "100"
TASK ERROR: startup for container '100' failed

Configuration:

root@cloud:/# getent group www-data | cut -d : -f3
33
root@proxmox:~# ls -la /dev/dri
total 0
drwxr-xr-x  3 root root        100 May 18 19:44 .
drwxr-xr-x 20 root root       4460 May 19 17:04 ..
drwxr-xr-x  2 root root         80 May 18 19:44 by-path
crw-rw----  1 root video  226,   0 May 18 19:44 card0
crw-rw----  1 root render 226, 128 May 18 19:44 renderD128
root@proxmox:~# getent group render | cut -d : -f3
103
# For Proxmox 7.x (LXC 4.x uses CGroupV2):
lxc.cgroup2.devices.allow: c 226:128 rwm

# Bind mount the device from the host to the container
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0

lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 33
lxc.idmap: g 33 103 1
lxc.idmap: g 34 100034 65502
# /etc/subgid
root:100000:65536
root:103:1  # Add this line

@tomshomelab

@abracadabra1111 At first glance your mappings look correct. Are you launching the LXC's as root? I also noticed you're missing lxc.cgroup2.devices.allow in your config.

Was finally able to get it to work properly. Had to follow this for 12th gen Intel H/W: https://dgpu-docs.intel.com/installation-guides/index.html#intel-gen12-dg1-gpus

What page did you follow on your link to get yours working on Proxmox with Ubuntu?

@tomshomelab commented Jul 27, 2023

I have the same error as others here, with mappings of 103 instead of 107.

error:
lxc_map_ids: 3701 newgidmap failed to write mapping "newgidmap: gid range [103-104) -> [108-109) not allowed": newgidmap 3739942 0 100000 107 103 108 1 108 100108 65428
lxc_spawn: 1788 Failed to set up id mapping.
__lxc_start: 2107 Failed to spawn container "125"
TASK ERROR: startup for container '125' failed

config file;
arch: amd64
cores: 4
features: nesting=1
hostname: Plex-IGPU-V1
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.0.1,hwaddr=72:E6:DE:10:EA:19,ip=192.168.0>
ostype: ubuntu
rootfs: Local-NVME:subvol-125-disk-0,size=32G
swap: 512
unprivileged: 1
lxc.cgroup.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 107
lxc.idmap: g 103 108 1
lxc.idmap: g 108 100108 65428

@abracadabra1111

@abracadabra1111 At first glance your mappings look correct. Are you launching the LXC's as root? I also noticed you're missing lxc.cgroup2.devices.allow in your config.

Was finally able to get it to work properly. Had to follow this for 12th gen Intel H/W: https://dgpu-docs.intel.com/installation-guides/index.html#intel-gen12-dg1-gpus

What page did you follow on your link to get yours working on Proxmox with Ubuntu?

Page is deprecated now. But looks like the updated reference is here: https://dgpu-docs.intel.com/driver/installation.html

@eiqnepm commented Oct 9, 2023

Does anyone know how I can achieve this for my Debian container, so my Jellyfin server running on Docker in the container can access hardware encoding?

https://hub.docker.com/r/linuxserver/jellyfin#:~:text=Hardware%20Acceleration-,Intel,-Hardware%20acceleration%20users

@gomez4758

I just spent hours trying to get my unprivileged LXC of Plex to work. After trying different things in this thread and other blogs, I got the below to work for me.

In the file /etc/pve/lxc/ID.conf add these:
	lxc.idmap: u 0 100000 65536
	lxc.idmap: g 0 100000 108
	lxc.idmap: g 108 108 1
	lxc.idmap: g 109 100109 65426
You need to also add the below to the file /etc/subgid (allows 108 to be mapped in lxc):
	root:108:1
I also did this (in the lxc Plex container), don't know if it was needed (basically added plex to different groups):
	usermod -a -G render plex
	usermod -a -G nogroup plex

Basically just mapping 108 from the host to the LXC. Note that the group name was different on the host and in the LXC, but both had id 108 in /etc/group.

host: crw-rw---- 1 root crontab 226, 128 Dec 5 22:58 renderD128
lxc: crw-rw---- 1 nobody nogroup 226, 128 Dec 5 22:58 renderD128

After doing the steps above, in the lxc it now shows up as:
lxc: crw-rw---- 1 nobody render 226, 128 Dec 5 22:58 renderD128

Hope this helps anyone who hasn't gotten it to work.

@chicungunya commented Feb 26, 2024

Hello guys,
I tried everything but it's not working and I don't know why! What am I missing? Thanks!
106 is my render group in the container
104 is my render group on the PVE host

unprivileged: 1
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/renderD128 none bind,optional,create=file
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 106
lxc.idmap: g 106 104 1
lxc.idmap: g 107 100107 65429

nano /etc/subgid:

root:100000:65536
root:104:1

ls -l /dev/dri in my container :

drwxr-xr-x 2 nobody nogroup       80 Feb 26 13:12 by-path
crw-rw-rw- 1 nobody nogroup 226,   0 Feb 26 13:12 card0
crw-rw-rw- 1 nobody render  226, 128 Feb 26 13:12 renderD128

When I put the container in privileged mode, it works.
