Running Plex in Docker with Hardware Transcoding

Prepare everything for a successful Plex deployment with hardware transcoding


This post describes how to set up Plex in a Docker container using the hardware transcoding capability of my Quadro GPU and the NVIDIA Container Toolkit.

Install Ubuntu server from scratch

My hardware is the following setup… TL;DR: a Lenovo P330 Tiny with 8 GB RAM, an i5-8500, a Quadro P400 and a 256 GB Samsung NVMe SSD.

 

root@pve2:/home/numark1# lshw -short
H/W path         Device          Class          Description
===========================================================
                                 system         30CES0B600 (LENOVO_MT_30CE_BU_Think_FM_ThinkStation P330 Tiny)
/0                               bus            3135
/0/0                             memory         64KiB BIOS
/0/3b                            memory         8GiB System Memory
/0/3b/0                          memory         8GiB SODIMM DDR4 Synchronous 2667 MHz (0.4 ns)
/0/3b/1                          memory         [empty]
/0/45                            memory         384KiB L1 cache
/0/46                            memory         1536KiB L2 cache
/0/47                            memory         9MiB L3 cache
/0/48                            processor      Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz
/0/100                           bridge         8th Gen Core Processor Host Bridge/DRAM Registers
/0/100/1                         bridge         6th-10th Gen Core Processor PCIe Controller (x16)
/0/100/1/0                       display        GP107GL [Quadro P400]
/0/100/1/0.1     card0           multimedia     GP107GL High Definition Audio Controller
/0/100/1/0.1/0   input19         input          HDA NVidia HDMI/DP,pcm=3
/0/100/1/0.1/1   input20         input          HDA NVidia HDMI/DP,pcm=7
/0/100/1/0.1/2   input21         input          HDA NVidia HDMI/DP,pcm=8
/0/100/1/0.1/3   input22         input          HDA NVidia HDMI/DP,pcm=9
/0/100/1/0.1/4   input23         input          HDA NVidia HDMI/DP,pcm=10
/0/100/1/0.1/5   input24         input          HDA NVidia HDMI/DP,pcm=11
/0/100/8                         generic        Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model
/0/100/14                        bus            Cannon Lake PCH USB 3.1 xHCI Host Controller
/0/100/14/0      usb1            bus            xHCI Host Controller
/0/100/14/1      usb2            bus            xHCI Host Controller
/0/100/14.2                      memory         RAM memory
/0/100/16                        communication  Cannon Lake PCH HECI Controller
/0/100/16.3                      communication  Cannon Lake PCH Active Management Technology - SOL
/0/100/17                        storage        Cannon Lake PCH SATA AHCI Controller
/0/100/1b                        bridge         Cannon Lake PCH PCI Express Root Port #21
/0/100/1b/0      /dev/nvme0      storage        SAMSUNG MZVLB256HBHQ-000L7
/0/100/1b/0/0    hwmon1          disk           NVMe disk
/0/100/1b/0/2    /dev/ng0n1      disk           NVMe disk
/0/100/1b/0/1    /dev/nvme0n1    disk           256GB NVMe disk
/0/100/1b/0/1/1                  volume         1074MiB Windows FAT volume
/0/100/1b/0/1/2  /dev/nvme0n1p2  volume         237GiB EXT4 volume
/0/100/1d                        bridge         Cannon Lake PCH PCI Express Root Port #9
/0/100/1f                        bridge         Q370 Chipset LPC/eSPI Controller
/0/100/1f/0                      system         PnP device PNP0c02
/0/100/1f/1                      system         PnP device PNP0c02
/0/100/1f/2                      generic        PnP device INT3f0d
/0/100/1f/3                      system         PnP device PNP0c02
/0/100/1f/4                      system         PnP device PNP0c02
/0/100/1f/5                      system         PnP device PNP0c02
/0/100/1f/6                      system         PnP device PNP0c02
/0/100/1f.4                      bus            Cannon Lake PCH SMBus Controller
/0/100/1f.5                      bus            Cannon Lake PCH SPI Controller
/0/100/1f.6      eno1            network        Ethernet Connection (7) I219-LM
/1                               power          To Be Filled By O.E.M.
/2               /dev/fb0        display        EFI VGA
/3               input0          input          Sleep Button
/4               input1          input          Power Button
/5               input2          input          Power Button

I installed Ubuntu 22.04.1 LTS from a USB stick, did not enable the installation of the proposed nvidia-515 driver, used no LVM and enabled SSH.

 
root@pve2:/home/numark1# apt list --installed | grep nvidia

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libnvidia-cfg1-515-server/jammy-updates,jammy-security,now 515.65.01-0ubuntu0.22.04.1 amd64 [installed,automatic]
libnvidia-compute-515-server/jammy-updates,jammy-security,now 515.65.01-0ubuntu0.22.04.1 amd64 [installed,automatic]
linux-modules-nvidia-515-server-5.15.0-46-generic/jammy-updates,jammy-security,now 5.15.0-46.49 amd64 [installed,automatic]
linux-modules-nvidia-515-server-generic/jammy-updates,jammy-security,now 5.15.0-46.49 amd64 [installed]
linux-objects-nvidia-515-server-5.15.0-46-generic/jammy-updates,jammy-security,now 5.15.0-46.49 amd64 [installed,automatic]
linux-signatures-nvidia-5.15.0-46-generic/jammy-updates,jammy-security,now 5.15.0-46.49 amd64 [installed,automatic]
nvidia-compute-utils-515-server/jammy-updates,jammy-security,now 515.65.01-0ubuntu0.22.04.1 amd64 [installed,automatic]
nvidia-headless-no-dkms-515-server/jammy-updates,jammy-security,now 515.65.01-0ubuntu0.22.04.1 amd64 [installed]
nvidia-kernel-common-515-server/jammy-updates,jammy-security,now 515.65.01-0ubuntu0.22.04.1 amd64 [installed,automatic]
nvidia-kernel-source-515-server/jammy-updates,jammy-security,now 515.65.01-0ubuntu0.22.04.1 amd64 [installed,automatic]
nvidia-utils-515-server/jammy-updates,jammy-security,now 515.65.01-0ubuntu0.22.04.1 amd64 [installed]
 
root@pve2:/home/numark1# lsb_release -d
Description:    Ubuntu 22.04.1 LTS

I also installed the nvidia-utils package to have nvidia-smi available for verifying that the GPU is working.
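Judging from the package list above, the headless server driver and the utils were most likely pulled in with a command like the following (package names taken from the apt output; adjust the version to your setup):

 
apt install nvidia-headless-no-dkms-515-server nvidia-utils-515-server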

 
root@pve2:/home/numark1# nvidia-smi
Fri Aug 12 12:21:17 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro P400         Off  | 00000000:01:00.0 Off |                  N/A |
| 54%   51C    P0    N/A /  N/A |      0MiB /  2048MiB |      2%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Install docker & docker-compose

I use the installation guide from Docker itself; you can find it here: Docker Engine installation.

The packages ca-certificates, curl, gnupg and lsb-release are already up to date, so we can proceed with adding the GPG key to the keyring, followed by an apt update. In the output of the update you can see the added Docker repository.
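For reference, the key and repository steps from Docker's guide look roughly like this (check the guide for the current form; we are already root, so no sudo needed):

 
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null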

 
root@pve2:/home/numark1# apt update
Get:1 https://download.docker.com/linux/ubuntu jammy InRelease [48.9 kB]
Hit:2 http://de.archive.ubuntu.com/ubuntu jammy InRelease
Get:3 http://de.archive.ubuntu.com/ubuntu jammy-updates InRelease [114 kB]
Get:4 https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages [6,255 B]
Get:5 http://de.archive.ubuntu.com/ubuntu jammy-backports InRelease [99.8 kB]
Get:6 http://de.archive.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Fetched 379 kB in 1s (381 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
2 packages can be upgraded. Run 'apt list --upgradable' to see them.

After that we can start the installation of the engine.

 
apt install docker-ce docker-ce-cli containerd.io

The last step is to install docker-compose; this is described here. Today the current version on the releases page is 2.15.1.

 
root@pve2:/home/numark1# curl -L "https://github.com/docker/compose/releases/download/v2.15.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

root@pve2:/home/numark1# chmod +x /usr/local/bin/docker-compose

Now the Docker engine and docker-compose are in place, and we can verify both via:

 
root@pve2:/home/numark1# docker -v
Docker version 20.10.17, build 100c701
root@pve2:/home/numark1# docker-compose -v
Docker Compose version v2.9.0

Install portainer

The first thing on all of my Docker hosts is to install portainer as the management software. I'll do this via docker-compose and set up everything else as a portainer stack. So create a compose.yml and start it via:

 
version: "3.9"

services:
 portainer:
  image: portainer/portainer-ce:latest
  restart: always
  container_name: portainer
  ports:
   - "9443:9443"
  volumes:
   - "/var/run/docker.sock:/var/run/docker.sock:ro"
   - "/etc/localtime:/etc/localtime:ro"
   - "$PWD/data_portainer:/data"

Start the container and wait for the image to be pulled; after that we can access the portainer portal on port 9443 and set up the admin account.

 
root@pve2:/home/numark1# docker-compose up -d
[+] Running 5/5
 ⠿ portainer Pulled                                    13.3s                                                                                            
   ⠿ 772227786281 Pull complete                        1.0s
   ⠿ 96fd13befc87 Pull complete                        1.1s                                                                                             
   ⠿ 4847ec395191 Pull complete                        3.3s                                                                                             
   ⠿ 4c2d012c4350 Pull complete                        3.4s                                                                                             
[+] Running 2/2
 ⠿ Network numark1_default  Created                    0.1s                                                                                           
 ⠿ Container portainer      Started                    0.7s                                                                                         

A side quest is to create a watchtower stack that keeps all your images up to date.

version: "3.9"

services:
 watchtower:
  image: containrrr/watchtower:latest
  restart: always
  container_name: watchtower
  command: --cleanup
  volumes:
   - /var/run/docker.sock:/var/run/docker.sock:ro
   - /etc/localtime:/etc/localtime:ro

Install nvidia-toolkit

As described here, we need to install the NVIDIA Container Toolkit for proper hardware acceleration inside the container. You will find the documentation for the toolkit here and the installation instructions here.

We need to add the keys and the sources again, followed by an apt update and then the installation of the nvidia-docker2 package.

 
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
      && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
      && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
            sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
            sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

Then install the nvidia-docker2 package and restart the Docker service to finish the installation.

 
apt update
apt install -y nvidia-docker2
systemctl restart docker

To verify a successful installation, you can run a basic container that displays the nvidia-smi stats.

 
root@pve2:/home/numark1# docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
Unable to find image 'nvidia/cuda:11.0.3-base-ubuntu20.04' locally
11.0.3-base-ubuntu20.04: Pulling from nvidia/cuda
d7bfe07ed847: Pull complete
75eccf561042: Pull complete
191419884744: Pull complete
a17a942db7e1: Pull complete
16156c70987f: Pull complete
Digest: sha256:57455121f3393b7ed9e5a0bc2b046f57ee7187ea9ec562a7d17bf8c97174040d
Status: Downloaded newer image for nvidia/cuda:11.0.3-base-ubuntu20.04
Fri Aug 12 13:06:07 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro P400         Off  | 00000000:01:00.0 Off |                  N/A |
| 47%   49C    P0    N/A /  N/A |      0MiB /  2048MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Start the plex container from linuxserver.io and add hw transcoding capabilities

Now we add a new stack with the configuration for our plex setup. Before that, I created the skeleton I always use for Docker, which looks like the following.

  • /home/<user>/docker/docker-compose.yml
  • /home/<user>/docker/data_<service>/<data_files>

For my plex setup that means I created three folders inside “data_plex”: config, movies and tv. All three are mounted inside the container, and two of them (movies and tv) are also the mount points for my NAS shares that provide the data.
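A quick way to create that layout (paths taken from the compose file below):

 
mkdir -p /home/numark1/docker/data_plex/{config,movies,tv}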

version: "3.9"
services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    container_name: plex
    network_mode: host
    runtime: nvidia
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
      - PLEX_CLAIM=claim-0000000000000
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
    volumes:
      - /home/numark1/docker/data_plex/config:/config
      - /home/numark1/docker/data_plex/tv:/tv
      - /home/numark1/docker/data_plex/movies:/movies
    restart: unless-stopped

Important here is that you change the runtime to nvidia and add an environment variable to make the GPU visible inside the container. The claim is a helper that assigns the newly installed server to your plex account. You can generate a claim at https://plex.tv/claim; keep in mind that the claim is only valid for 4 minutes.

You don't need to explicitly mount the direct rendering interface /dev/dri into the container, because the nvidia runtime already exposes the required devices.

If you want to mount your NAS paths on the machine, keep in mind to install cifs-utils before mounting the shares, for example like this:
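A minimal sketch, assuming a hypothetical CIFS share //nas.local/movies and a credentials file you have created yourself; adjust host, share and mount point to your NAS:

 
apt install cifs-utils
# hypothetical share and credentials file; uid/gid match PUID/PGID in the compose file
mount -t cifs //nas.local/movies /home/numark1/docker/data_plex/movies -o credentials=/root/.smbcredentials,uid=1000,gid=1000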

After deploying the stack, you can open the admin URL http://plex.myfqdn.de:32400/web in your browser and finish the setup.

To finally verify that hw transcoding is running, start a movie or series, switch the quality to a lower one (for example 1080p) and have a look at the dashboard. It should show Transcode (hw); if there is no (hw), the stream is being transcoded on the CPU and you will probably see a huge CPU load.
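You can also verify it from the host side: while a hardware transcode is running, the Plex Transcoder process should show up in the nvidia-smi process list, both on the host and inside the container.

 
watch -n 1 nvidia-smi
docker exec plex nvidia-smi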


Lessons learned

There are two really important things to mention: the first is the initrd size with a potential boot loop, and the second is the version of the NVIDIA driver.

There is a known issue where installing nvidia-driver-515 on 22.04 ends up in a boot loop because of an “out of memory” error in GRUB. I solved this issue by modifying /etc/initramfs-tools/initramfs.conf, setting the module selection (MODULES) to “dep” and the compression (COMPRESS) to “bzip2”.

 
# initramfs.conf
# Configuration file for mkinitramfs(8). See initramfs.conf(5).
#
# Note that configuration options from this file can be overridden
# by config files in the /etc/initramfs-tools/conf.d directory.

#
# MODULES: [ most | netboot | dep | list ]
#
# most - Add most filesystem and all harddrive drivers.
# dep - Try and guess which modules to load.
# netboot - Add the base modules, network modules, but skip block devices.
# list - Only include modules from the 'additional modules' list
#
MODULES=dep
#
# BUSYBOX: [ y | n | auto ]
#
# Use busybox shell and utilities.  If set to n, klibc utilities will be used.
# If set to auto (or unset), busybox will be used if installed and klibc will
# be used otherwise.
#
BUSYBOX=auto
#
# COMPRESS: [ gzip | bzip2 | lz4 | lzma | lzop | xz | zstd ]
#
COMPRESS=bzip2
#
# DEVICE: ...
#
# Specify a specific network interface, like eth0
# Overridden by optional ip= or BOOTIF= bootarg
#
DEVICE=
#
# NFSROOT: [ auto | HOST:MOUNT ]
#
NFSROOT=auto
#
# RUNSIZE: ...
#
# The size of the /run tmpfs mount point, like 256M or 10%
# Overridden by optional initramfs.runsize= bootarg
#
RUNSIZE=10%
#
# FSTYPE: ...
#
# The filesystem type(s) to support, or "auto" to use the current root

The second thing is the version of the NVIDIA driver. Only “nvidia-driver-515” exposes all codecs into the container; if you choose to install nvidia-driver-515-server, which is proposed in the Ubuntu setup, or install it later, you cannot use hw transcoding due to missing codecs. You also see this reported in the plex logs when the server tries to determine which codecs are supported by the hardware.

 
Aug 13, 2022 10:53:28.154 [0x7f9360470b38] DEBUG - [Req#13c/Transcode] Codecs: testing h264_nvenc (encoder)
Aug 13, 2022 10:53:28.154 [0x7f9360470b38] DEBUG - [Req#13c/Transcode] Codecs: hardware transcoding: testing API nvenc
Aug 13, 2022 10:53:28.456 [0x7f9360470b38] ERROR - [Req#13c/Transcode] [FFMPEG] - The minimum required Nvidia driver for nvenc is 418.30 or newer
Aug 13, 2022 10:53:28.456 [0x7f9360470b38] WARN - [Req#13c/Transcode] Codecs: avcodec_open2 returned -1 for encoder 'h264_nvenc'
Aug 13, 2022 10:53:28.540 [0x7f9360470b38] DEBUG - [Req#13c/Transcode] Codecs: testing h264_nvenc (encoder)
Aug 13, 2022 10:53:28.540 [0x7f9360470b38] DEBUG - [Req#13c/Transcode] Codecs: hardware transcoding: testing API nvenc
Aug 13, 2022 10:53:28.581 [0x7f9360470b38] ERROR - [Req#13c/Transcode] [FFMPEG] - The minimum required Nvidia driver for nvenc is 418.30 or newer
Aug 13, 2022 10:53:28.581 [0x7f9360470b38] WARN - [Req#13c/Transcode] Codecs: avcodec_open2 returned -1 for encoder 'h264_nvenc'
(the same sequence of testing, ERROR and WARN lines repeats for every subsequent transcode attempt)
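The fix is to switch from the server flavour to the regular desktop driver. A sketch based on the packages listed earlier; check apt list --installed | grep nvidia for the exact names on your system:

 
# remove the server-flavour driver packages (adjust names to your install)
apt purge nvidia-headless-no-dkms-515-server nvidia-utils-515-server
apt autoremove
# install the regular driver, which exposes the NVENC/NVDEC codecs
apt install nvidia-driver-515
reboot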