QNAP NAS: Nvidia Hardware Transcoding in Plex and Emby Docker Containers

@weshofmann · Last active August 28, 2023

Background

Last week I added an Nvidia Quadro P620 to my QNAP TVS-872XT NAS. For the last few days, I've been trying to get both Plex and Emby docker containers to actually use the Nvidia card for encoding and decoding media streams.

The instructions included with the linuxserver/plex image on Docker Hub indicate that to use Nvidia hardware decoding, you must first install the nvidia-docker2 runtime from Nvidia. Since QNAP ships a fairly old version of Docker and puts everything in very non-standard locations, I spent some time figuring out a way around that using QNAP's NVIDIA_GPU_DRV QPKG. What I ended up with are some docker-compose files that let Docker containers on the QNAP actually use an Nvidia GPU. They aren't pretty, and it's a bit of a hack, but it works.

Note: Currently, Plex doesn't use the Nvidia decoder in a number of situations, apparently because of some quality issues. I don't care; I'm doing this so I can put the encoding/decoding load on the GPU and leave the CPU free for other NAS containers and tasks. I created a handy container that wraps the transcoder binary to force it to use the Nvidia decoder, and that's the Plex image I'm currently using. You can find it here: https://github.com/weshofmann/docker-plex-nvdec

It Works!!!

Playing a 4K video in Plex and transcoding to 1080p

plex-streaming-movie-hw-decode.png

GPU and CPU usage during playback

qnap-host-gpu-status.png qnap-host-cpu-status.png

Plex logfile showing that Nvidia hardware is being used

plex-streaming-movie-hw-decode-logs.png

Output of nvidia-smi on the QNap host showing that the GPU is processing streams

qnap-host-nvidia-smi.png

And here's playback of the same movie using Emby

emby-streaming-movie-how-decode.png

Overview -- How This Works

So, here's the basic idea...

  1. First, we create a new, empty directory tree on the host that will hold the files for the new volume.

  2. Then, these docker-compose files do the following:

    1. Create a Docker local volume that wraps an overlay filesystem, pulling in the QNAP NVIDIA_GPU_DRV directory tree as the bottom ("lower") layer, and adding the newly created directories from step 1 as the "upper" and "work" layers, which start out empty.

    2. Start a temporary "prep" container (plex-prep) that simply copies the contents of the container's /usr filesystem into the new volume we created above. Since that new volume is an overlay filesystem, the copied files actually end up getting stored in the newly created upper directory.

    3. Start the actual Plex container (plex), which mounts the new volume over the existing /usr directory tree. This gives Plex access to all of the Nvidia binaries and libraries available on the host, while keeping the image's own /usr contents from step 2 (see the sketch below).
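
For reference, the volume definition in the compose files below is roughly equivalent to creating the overlay volume by hand with the docker CLI. Here's a minimal sketch using the Plex paths from the instructions; adjust the paths to match your own share layout:

  # Create the (initially empty) upper and work directories on the host
  mkdir -p /share/DockerData/volume_overlays/plex/upper /share/DockerData/volume_overlays/plex/work

  # Define a local volume backed by an overlay filesystem:
  #   lowerdir = the QNAP NVIDIA_GPU_DRV driver tree (read-only bottom layer)
  #   upperdir = where writes to the merged volume actually land
  docker volume create --driver local \
    --opt type=overlay \
    --opt device=overlay \
    --opt o=lowerdir=/share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/DockerData/volume_overlays/plex/upper,workdir=/share/DockerData/volume_overlays/plex/work \
    plex_usr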

The Docker Compose Files

Instructions

  1. Open App Center on the NAS and install both of the available Nvidia GPU packages.

  2. Configure your NAS to allow Container Station to use your GPU:

    1. Go to Control Panel -> System -> Hardware -> Expansion Cards

    2. Select Container Station in the "Resource Use" pull-down menu for your graphics card.

  3. Create an empty directory tree in a share on the QNAP host where we can store data for a new overlay filesystem (e.g. /share/DockerData/volume_overlays/plex).

    1. Create an upper and a work subdirectory in the new directory:

      /share/DockerData/volume_overlays/plex/upper

      /share/DockerData/volume_overlays/plex/work

  4. Create a new container in QNAP Container Station using the Create Application button. Give it a name, then paste in the contents of one of the YAML files below and click Apply.
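
Before pasting in the YAML, you can optionally sanity-check the host from an SSH session. These are just quick checks using the default paths assumed below (your system volume may not be CE_CACHEDEV1_DATA, so adjust accordingly):

  # The Nvidia device nodes should exist once the driver QPKG is installed
  ls -l /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm

  # The driver tree that the overlay volume will pull in as its lower layer
  ls /share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr/bin | head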

Plex

version: "3.4"

volumes:
  plex_usr:
    driver: local
    driver_opts:
      type: overlay
      device: overlay
      # Change the '/share/DockerData/volume_overlays/plex' to whatever
      # directory you'd like to use to store the temp volume overlay files
      # Note: That path appears here TWICE so change both of them!
      o: lowerdir=/share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/DockerData/volume_overlays/plex/upper,workdir=/share/DockerData/volume_overlays/plex/work

services:

  plex-prep:
    image: weshofmann/plex-nvdec
    container_name: plex-prep
    volumes:
      - plex_usr:/plex_usr
      - /share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr/:/nvidia:ro
    environment:
      PUID:      "1001"       # Change these values as necessary for your own containers
      PGID:      "100"
      UMASK_SET: "022"
      TZ:        "US/Eastern"
    entrypoint: /bin/bash -x -c "cp -Rv /usr/* /plex_usr/"
    restart: "no"       # only needs to run once

  plex:
    # linuxserver/docker-plex modified to add support for NVidia decoding + additional plugins.
    image: weshofmann/plex-nvdec
    container_name: plex
    depends_on: 
      - plex-prep
    network_mode: host
    devices:
      # - /dev/dri            # uncomment this to use intel transcoder if available
      - /dev/nvidia0
      - /dev/nvidiactl
      - /dev/nvidia-uvm
    environment:
      VERSION: docker
      NVIDIA_VISIBLE_DEVICES: all
      PUID:      "1001"       # Change these values as necessary for your own containers
      PGID:      "100"
      UMASK_SET: "022"
      TZ:        "US/Eastern"
    volumes:
      - plex_usr:/usr         # don't modify this

      # Change the following mounts to match your locations for config, tv, movies, etc.
      - /share/Scratch:/scratch
      - /share/DockerData/plex/config:/config
      - /share/TV/Library:/tv
      - /share/Movies/Library:/movies
      - /share/Scratch/transcode/plex:/transcode
    restart: unless-stopped
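
Once the stack is up, plex-prep should run once, copy the image's /usr into the overlay, and exit; the plex container then starts with the merged /usr. A couple of ways to confirm it's working (same default paths as above; if nvidia-smi complains about libnvidia-ml.so, see the comments at the bottom of this page):

  # Watch the GPU from the QNAP host while a stream is transcoding
  /share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr/bin/nvidia-smi

  # Confirm the Nvidia libraries are visible inside the container's merged /usr
  docker exec plex ls /usr/lib | grep -i nvidia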

Emby

version: "3.4"

  emby_usr:
    driver: local
    driver_opts:
      type: overlay
      device: overlay
      # Change the '/share/DockerData/volume_overlays/emby' to whatever
      # directory you'd like to use to store the temp volume overlay files
      # Note: That path appears here TWICE so change both of them!
      o: lowerdir=/share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/DockerData/volume_overlays/emby/upper,workdir=/share/DockerData/volume_overlays/emby/work

services:

  emby-prep:
    image: linuxserver/emby
    container_name: emby-prep
    environment:
      PUID:      "1001"       # Change these values as necessary for your own containers
      PGID:      "100"
      UMASK_SET: "022"
      TZ:        "US/Eastern"
    volumes:
      - emby_usr:/emby_usr
      - /share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr/:/nvidia:ro
    entrypoint: /bin/bash -x -c "cp -Rv /usr/* /emby_usr/"
    restart: "no"       # only needs to run once
  
  emby:
    image: linuxserver/emby
    container_name: emby
    depends_on:
      - emby-prep
    environment:
      PUID:      "1001"
      PGID:      "100"
      UMASK_SET: "022"
      TZ:        "US/Eastern"
    devices:
      # - /dev/dri            # uncomment this to use intel transcoder if available
      - /dev/nvidia0
      - /dev/nvidiactl
      - /dev/nvidia-uvm
    volumes:
      - emby_usr:/usr         # don't modify this

      # Change the following mounts to match your locations for config, tv, movies, etc.
      - /share/Scratch:/scratch
      - /share/DockerData/emby/config:/config
      - /share/TV/Library:/data/tvshows
      - /share/Movies/Library:/data/movies
      - /share/Scratch/transcode/emby:/transcode
    ports:
      - "8096:8096"
      - "8920:8920"
    restart: unless-stopped

Notes and Issues

Even with the Nvidia GPU transcoder active, Plex still seems to use significantly more CPU than Emby does. Conversely, GPU load is significantly higher during playback under Emby, so I would guess Emby is offloading more tasks to the GPU than Plex is. I'm not at all familiar with the internals of either product, so I have no explanation for this at the moment. Overall, Emby seems much better suited to a machine where low CPU usage matters, like a NAS.

Plex is a little unstable from time to time when using the GPU transcoder. Sometimes, when you change the stream quality (which spawns a new transcoder process), it will hang for a bit, the logs will say it can't initialize the GPU encoder, and it will fall back to software transcoding. If you just exit playback and start it again, it will usually work the second time. I have no idea what's going on.

Lastly, THIS IS A HACK. It's quite likely that the modified /usr directory we are mounting on the container is wonky enough to cause some of these intermittent issues. That said, it works the majority of the time (basically all the time with Emby), so it's very usable.

Disclaimer

I make no warranty about any of this; use at your own risk! It works for me, but your configuration might be fundamentally different enough that it doesn't. It might even cause your computer, your house, your neighborhood, or your city to simply explode. It might summon a demon from another plane. But hopefully it will just work. Your mileage may vary.

version: "3.4"
volumes:
emby_usr:
driver: local
driver_opts:
type: overlay
device: overlay
# Change the '/share/DockerData/volume_overlays/plex' to whatever
# directory you'd like to use to store the temp volume overlay files
# Note: That path appears here TWICE so change both of them!
o: lowerdir=/share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/DockerData/volume_overlays/emby/upper,workdir=/share/DockerData/volume_overlays/emby/work
services:
emby-prep:
image: linuxserver/emby
container_name: emby-prep
environment:
PUID: "1001" # Change these values as necessary for your own containers
PGID: "100"
UMASK_SET: "022"
TZ: "US/Eastern"
volumes:
- emby_usr:/emby_usr
- /share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr/:/nvidia:ro
entrypoint: /bin/bash -x -c "cp -Rv /usr/* /emby_usr/"
restart: "no" # only needs to run once
emby:
image: linuxserver/emby
container_name: emby
depends_on:
- emby-prep
environment:
PUID: "1001"
PGID: "100"
UMASK_SET: "022"
TZ: "US/Eastern"
devices:
# - /dev/dri # uncomment this to use intel transcoder if available
- /dev/nvidia0
- /dev/nvidiactl
- /dev/nvidia-uvm
volumes:
- emby_usr:/usr # dont' modify this
# Change the following mounts to match your locations for config, tv, movies, etc.
- /share/Scratch:/scratch
- /share/DockerData/emby/config:/config
- /share/TV/Library:/data/tvshows
- /share/Movies/Library:/data/movies
- /share/Scratch/transcode/emby:/transcode
ports:
- "8096:8096"
- "8920:8920"
restart: unless-stopped
version: "3.4"
volumes:
plex_usr:
driver: local
driver_opts:
type: overlay
device: overlay
# Change the '/share/DockerData/volume_overlays/plex' to whatever
# directory you'd like to use to store the temp volume overlay files
# Note: That path appears here TWICE so change both of them!
o: lowerdir=/share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/DockerData/volume_overlays/plex/upper,workdir=/share/DockerData/volume_overlays/plex/work
services:
plex-prep:
image: weshofmann/plex-nvdec
container_name: plex-prep
volumes:
- plex_usr:/plex_usr
- /share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr/:/nvidia:ro
environment:
PUID: "1001" # Change these values as necessary for your own containers
PGID: "100"
UMASK_SET: "022"
TZ: "US/Eastern"
entrypoint: /bin/bash -x -c "cp -Rv /usr/* /plex_usr/"
restart: "no" # only needs to run once
plex:
# linuxserver/docker-plex modified to add support for NVidia decoding + additional plugins.
image: weshofmann/plex-nvdec
container_name: plex
depends_on:
- plex-prep
network_mode: host
devices:
# - /dev/dri # uncomment this to use intel transcoder if available
- /dev/nvidia0
- /dev/nvidiactl
- /dev/nvidia-uvm
environment:
VERSION: docker
NVIDIA_VISIBLE_DEVICES: all
PUID: "1001" # Change these values as necessary for your own containers
PGID: "100"
UMASK_SET: "022"
TZ: "US/Eastern"
volumes:
- plex_usr:/usr # dont' modify this
# Change the following mounts to match your locations for config, tv, movies, etc.
- /share/Scratch:/scratch
- /share/DockerData/plex/config:/config
- /share/TV/Library:/tv
- /share/Movies/Library:/movies
- /share/Scratch/transcode/plex:/transcode
restart: unless-stopped
@nader-eloshaiker

Just a small note, in the instructions, you have step 3 as:

Create an empty directory tree in a share on the QNap host where we can store data for a new overlay filesystem (e.g. /share/DockerData/overlay_volumes/plex).

But in the plex.yaml you have:

      # Change the '/share/DockerData/volume_overlays/plex' to whatever
      # directory you'd like to use to store the temp volume overlay files
      # Note: That path appears here TWICE so change both of them!
      o: lowerdir=/share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/DockerData/volume_overlays/plex/upper,workdir=/share/DockerData/volume_overlays/plex/work

So you have overlay_volumes and volume_overlays

@nader-eloshaiker

In the section for Docker Compose Files in Plex, you have:

services:

  plex-prep:
    image: weshofmann/plex-nvdec
    container_name: plex-prep
    <<: *global

I don't think you meant to have <<: *global in there.

@Kreep68

Kreep68 commented May 21, 2020

Hi, I'm really interested in this Docker setup (I'm a real noob), but unfortunately when I tried it, the YAML test never succeeds in Container Station -> it's an issue with the *global (line 19).
What can I change to make it work?

@Kreep68

Kreep68 commented May 21, 2020

How do we update to the latest version of the Plex server? (Sorry for all these questions, but I'm just discovering containers.)

@disenter

disenter commented May 6, 2021

I'm probably a few steps behind you, and I'm also not using Docker, but I have a TVS-872XT, and while I think I've been able to get Plex to natively use my P2000 for transcoding purposes, I'm unable to monitor it directly with nvidia-smi. I can just see its activity in QNAP's hardware applet, and note that the CPU is doing very little while transcoding.

What did you do to get nvidia-smi to output properly on your Qnap?

I first tried as above, with the export PATH=\ and just pointed it at the directory containing nvidia-smi.
I still couldn't call it from anywhere, and when running it from the directory where it lives, like /share/CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr/bin, it gives the error:

NVIDIA-SMI couldn't find libnvidia-ml.so library in your system. Please make sure that the NVIDIA Display Driver is properly installed and present in your system.
Please also try adding directory that contains libnvidia-ml.so to your system PATH.

Using find . -iname libnvidia-ml.so, I'm presented with a few options:

/share/CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/cuda-10.2/targets/x86_64-linux/lib/stubs/libnvidia-ml.so
/share/CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr/lib/libnvidia-ml.so
/share/CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr/nvidia.u18.04/libnvidia-ml.so
/share/CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr/nvidia/libnvidia-ml.so

Adding any of these directories to PATH didn't seem to make any difference, although echoing it showed the directories had been added OK.

I then came across a post about using ~/.bashrc instead of the default ~/.profile and incorporated that, adding the needed paths to ~/.bashrc and then changing my ~/.profile as you have, but it still won't work.

The same error persists, and I can't use it anywhere except within the directories where it is found.

What am I doing wrong?

I really need this to work properly, not so much for transcoding, but to make use of my P2000 with Tdarr to encode down my library and reclaim space that I'm fast running out of. But the first step is getting the Nvidia drivers and tools to behave as they should, since I can't seem to make Tdarr use them so far either, and it's difficult to sort out since I can't directly monitor it with nvidia-smi.

Any ideas?

@It0w

It0w commented Feb 28, 2023

Hello, I want to thank you for your guide.

You helped me a lot to get the transcoding working properly with Jellyfin.

Thank you!

TS-473a
T-1000

version: '3.4'

volumes:
    jellyfin_usr:
        driver: local
        driver_opts:
            type: overlay
            device: overlay
            o: lowerdir=/share/ZFS530_DATA/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/Container/docker-jellyfin/overlay/upper,workdir=/share/Container/docker-jellyfin/overlay/work

services:

    jellyfin-prep:
        image: jellyfin/jellyfin
        container_name: jellyfin-prep
        environment:
            PUID:       "1000"
            PGID:       "100"
            UMASK_SET:  "022"
            TZ:         "Europe/Berlin"
        volumes:
            - jellyfin_usr:/jellyfin_usr
            - /share/ZFS530_DATA/.qpkg/NVIDIA_GPU_DRV/usr/:/nvidia:ro
        entrypoint: /bin/bash -x -c "cp -Rv /usr/* /jellyfin_usr/"
        restart: "no"
        
    jellyfin:
        image: jellyfin/jellyfin
        container_name: jellyfin
        depends_on:
            - jellyfin-prep
        environment:
            PUID:       "1000"
            PGID:       "100"
            UMASK_SET:  "022"
            TZ:         "Europe/Berlin"
        devices:
            - /dev/nvidia0
            - /dev/nvidiactl
            - /dev/nvidia-uvm
        network_mode: 'host'
        volumes:
            - jellyfin_usr:/usr
            - /etc/localtime:/etc/localtime:ro
            - /share/Container/docker-jellyfin/config:/config
            - /share/Download-Knecht/mediathek:/media
            - /share/Container/docker-jellyfin/cache:/cache
        restart: unless-stopped

@grossmaul

NVIDIA-SMI couldn't find libnvidia-ml.so library in your system. Please make sure that the NVIDIA Display Driver is properly installed and present in your system.
Please also try adding directory that contains libnvidia-ml.so to your system PATH.

I'm having this error too (QNAP TS-673A). Editing the path also does not help. Did anyone find a solution yet?

@grossmaul

I found the solution! The problem was the dynamic linker.

I've posted my instructions here:
jellyfin/jellyfin#9806 (comment)
