HOWTO: Enable NVIDIA GPU Support in Plex/Emby

justinglock40 · Respected Member · Original poster · May 27, 2018
Adding NVIDIA GPU support to the Plex Docker container.

First, on the host system, install the most recent NVIDIA driver (or your preferred version):
https://www.nvidia.com/en-us/drivers/unix/

***If you have a card such as a 9XX/10XX/16XX/20XX series, refer to these instructions to get the session-limit unlock patch installed and set up. Make sure you install the patch corresponding to the driver you installed above:
https://github.com/keylase/nvidia-patch

Once that is done, you must prepare your system for passthrough to Plex/Emby. I recommend following that repo's steps for installing the driver and patch so you don't get confused.

Code:
# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2

YOU MAY GET THIS MESSAGE:

Code:
Configuration file '/etc/docker/daemon.json'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** daemon.json (Y/I/N/O/D/Z) [default=N] ?

*****CHOOSE Y (THE MAINTAINER'S VERSION) SO THE NVIDIA RUNTIME BECOMES AVAILABLE*****
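For reference, after choosing Y the maintainer's version of daemon.json registers the nvidia runtime with Docker. A sketch of the expected contents, assuming the nvidia-docker2 defaults (your file may carry extra keys if you had prior customizations):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```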

Code:
# Reload the Docker daemon configuration (SIGHUP makes dockerd re-read daemon.json)
sudo pkill -SIGHUP dockerd

# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
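You can also do a quick sanity check that Docker registered the new runtime at all (exact output varies by Docker version):

```shell
# List the runtimes Docker knows about; "nvidia" should appear alongside "runc"
docker info 2>/dev/null | grep -i runtime
```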


After this is complete, you will need to add the following environment variables to your Plex/Emby container.

Go to Portainer and click "Duplicate/Edit" on the respective container, then go to the ENV tab and add:

NVIDIA_VISIBLE_DEVICES=all
NVIDIA_DRIVER_CAPABILITIES=compute,video,utility

Lastly, go to the Runtime & Resources tab, change the runtime to "nvidia", and redeploy the container.
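If you prefer the docker CLI over Portainer, the same settings can be sketched as a single `docker run`. The image name and volume paths below are placeholder assumptions, not from this guide; substitute your own:

```shell
# Hypothetical example: run Plex with the nvidia runtime and the env vars above.
# Replace the image, container name, and paths with your own values.
docker run -d \
  --name plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
  -v /opt/plex/config:/config \
  -v /mnt/media:/media \
  plexinc/pms-docker
```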

Any container added after this install will have the NVIDIA runtime available automatically, but if you need a container to actually access the GPU you must add those environment variables. I would do a pull request to have them added to the YAML, but I don't know how to.
 

justinglock40
You shouldn't have to. I put in a pull request to add the necessary lines when Plex is reloaded and installed; not sure if it's been merged or not.
 

bubbadk · Legendary Member · Staff · Denmark · Mar 18, 2018
I don't know what the problem is. Have you done anything now? :)

Post automatically merged:

Still get this error (see attached screenshot).

justinglock40
Not sure what's going on with your install. Maybe reinstall the NVIDIA drivers with the .run file and then try the steps above.
 

shuozou · Junior Member · Nov 28, 2019
I tried every step but it still doesn't work. Reinstalling the driver now.
 

Baraka · Noob · Sep 11, 2018
I had the same issue and managed to fix it.

The issue is that `nvidia-smi` never runs because the proper drivers (via CUDA) aren't installed:

Code:
"NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running."

So you must install the CUDA toolkit from NVIDIA. Choose your distro and run the commands provided:

Code:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-ubuntu1604.pin

sudo mv cuda-ubuntu1604.pin /etc/apt/preferences.d/cuda-repository-pin-600

wget http://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda-repo-ubuntu1604-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb

sudo dpkg -i cuda-repo-ubuntu1604-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb

sudo apt-key add /var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub

sudo apt-get update

sudo apt-get -y install cuda

sudo reboot
Ensure you reboot your box.

PLEASE KEEP IN MIND that doing this will install the X server. Removing it with

Code:
sudo apt-get remove xserver-xorg-core

will remove CUDA and break things back to not working, as the X server appears to be a requirement for CUDA.

The X server is just a remote GUI desktop (screenshot was attached here), but no session will start no matter what.
*Please don't ask for a fix, as I won't provide a fix for that.*

Now run

Code:
nvidia-smi

You should get output like:

Code:
user@host:~$ nvidia-smi
Mon Dec  2 18:27:08 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 750 Ti  Off  | 00000000:0B:00.0 Off |                  N/A |
| 40%   33C    P8     1W /  38W |      0MiB /  2002MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
user@host:~$
and ensure the Docker test works.

From there run `sudo docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi`:

Code:
user@host:~$ sudo docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
Tue Dec  3 00:32:22 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 750 Ti  Off  | 00000000:0B:00.0 Off |                  N/A |
| 40%   33C    P8     1W /  38W |      0MiB /  2002MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

I'm running a box via ESXi with Ubuntu 16.04.6, and doing this allowed `nvidia-smi` to run with no issues in the Docker container.

I hope this helps, happy transcoding!

- Baraka
Post automatically merged:

Forgot to add: if you haven't enabled the iGPU setting with PGBlitz, do so. Otherwise:
go to Portainer
then go to the Runtime & Resources tab
find Runtime and change it to nvidia

On your box, run

Code:
nvidia-smi --query-gpu=gpu_name,gpu_bus_id,gpu_uuid --format=csv,noheader | sed -e s/00000000://g | sed 's/\,\ /\n/g'
Copy your GPU ID.

Go back to Portainer.

Go to the ENV tab and add NVIDIA_VISIBLE_DEVICES=YOURGPUID

Deploy the container and play a movie to transcode.
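The copy-and-paste step above can also be scripted; a sketch assuming a single GPU (the image name here is a placeholder, not from this thread):

```shell
# Grab the first GPU's UUID and pass it to the container instead of "all"
GPU_UUID=$(nvidia-smi --query-gpu=gpu_uuid --format=csv,noheader | head -n1)
echo "Using GPU: $GPU_UUID"
docker run -d --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES="$GPU_UUID" plexinc/pms-docker
```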

Then on your box run

Code:
watch nvidia-smi

You should see:

Code:
Every 2.0s: nvidia-smi        Mon Dec  2 19:40:27 2019

Mon Dec  2 19:40:27 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 750 Ti  Off  | 00000000:0B:00.0 Off |                  N/A |
| 40%   42C    P0     1W /  38W |    265MiB /  2002MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     18714      C   /usr/lib/plexmediaserver/Plex Transcoder     254MiB |
+-----------------------------------------------------------------------------+
You're welcome,
-Baraka
