Prerequisites
Find NVIDIA GPU Model Name
Prepare the driver download link
Find the proper driver on the NVIDIA website
Note: Make sure to select “Linux 64-bit” as your OS
Hit the “Search” button.
Hit the “Download” button.
Right-click the download button and “Copy link address”.
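The copied address usually points directly at the .run installer and looks something like the line below (illustrative only; use exactly the link the site gives you):
# example only; the path and <VERSION> depend on your GPU and the current driver release
https://us.download.nvidia.com/XFree86/Linux-x86_64/<VERSION>/NVIDIA-Linux-x86_64-<VERSION>.run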
Proxmox Host Set-up
SSH into your Proxmox instance.
Create the file `/etc/modprobe.d/nvidia-installer-disable-nouveau.conf` with the following contents:
# generated by nvidia-installer
blacklist nouveau
options nouveau modeset=0
Reboot the machine:
reboot now
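Once the host is back up, you can optionally confirm that nouveau is no longer loaded; if the blacklist took effect, this prints nothing:
lsmod | grep nouveau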
Run the following:
apt install build-essential pve-headers-$(uname -r)
wget <link you copied>
chmod +x ./NVIDIA-Linux-x86_64-<VERSION>.run
./NVIDIA-Linux-x86_64-<VERSION>.run
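When the installer finishes, a quick optional sanity check is to run nvidia-smi on the host; it should list your GPU and the driver version you just installed:
nvidia-smi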
Edit `/etc/modules-load.d/modules.conf` and add the following to the end of the file:
nvidia
nvidia_uvm
Run the following:
update-initramfs -u
Create the file `/etc/udev/rules.d/70-nvidia.rules` and add the following:
# /etc/udev/rules.d/70-nvidia.rules
# Create /dev/nvidia0, /dev/nvidia1 … and /dev/nvidiactl when the nvidia module is loaded
KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L && /bin/chmod 666 /dev/nvidia*'"
# Create the /dev/nvidia-uvm CUDA node when the nvidia_uvm module is loaded
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u && /bin/chmod 0666 /dev/nvidia-uvm*'"
Reboot the machine.
For each container
SSH into the Proxmox host.
Run the following:
modprobe nvidia-uvm
ls /dev/nvidia* -l
Note the major device numbers; you’ll need them in the next step.
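For reference, the output looks roughly like this; the major number is the first number of each major, minor pair (195, 509 and 235 below are only examples and will likely differ on your system):
# illustrative output; permissions, dates and numbers will vary
crw-rw-rw- 1 root root 195,   0 Jan  1 00:00 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Jan  1 00:00 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Jan  1 00:00 /dev/nvidia-modeset
crw-rw-rw- 1 root root 509,   0 Jan  1 00:00 /dev/nvidia-uvm
crw-rw-rw- 1 root root 509,   1 Jan  1 00:00 /dev/nvidia-uvm-tools
crw-rw-rw- 1 root root 235,   1 Jan  1 00:00 /dev/nvidia-caps/nvidia-cap1
crw-rw-rw- 1 root root 235,   2 Jan  1 00:00 /dev/nvidia-caps/nvidia-cap2
In this example there are three distinct major numbers: 195 for the nvidia devices, 509 for nvidia-uvm, and 235 for nvidia-caps. Those are the numbers that go into the config below.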
Edit `/etc/pve/lxc/<container ID>.conf` and add the following:
lxc.cgroup.devices.allow: c <major number of the /dev/nvidia* devices>:* rwm
lxc.cgroup.devices.allow: c <major number of the /dev/nvidia-uvm* devices>:* rwm
lxc.cgroup.devices.allow: c <major number of the /dev/nvidia-caps devices>:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap1 dev/nvidia-caps/nvidia-cap1 none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap2 dev/nvidia-caps/nvidia-cap2 none bind,optional,create=file
NOTE: Proxmox VE 7.x and later use cgroup v2.
TLDR: lxc.cgroup.devices.allow MUST be changed to lxc.cgroup2.devices.allow
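Putting it together, the allow lines on Proxmox VE 7.x or later would look like this, using the illustrative major numbers from the example output above (replace 195, 509 and 235 with the numbers you noted; the lxc.mount.entry lines stay the same):
# example for PVE 7.x+ (cgroup v2); major numbers are illustrative
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.cgroup2.devices.allow: c 235:* rwm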
Container/LXC
SSH into your container.
Run the following:
dpkg --add-architecture i386
apt update
apt install libc6:i386
wget <link you copied for the Proxmox step>
chmod +x ./NVIDIA-Linux-x86_64-<VERSION>.run
./NVIDIA-Linux-x86_64-<VERSION>.run --no-kernel-module
Reboot the container.
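Once the container is back up, running nvidia-smi inside it should show the same GPU and driver version as on the host (optional check):
nvidia-smi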
CUDA
SSH back into your container.
Run the following:
apt install nvidia-cuda-toolkit nvidia-cuda-dev
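You can optionally confirm the toolkit is installed by checking the compiler version:
nvcc --version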
Note: Plex DOES NOT USE THE GPU until you install CUDA.
Plex will detect the GPU during setup and enable the hardware transcoding checkbox, but it will NOT actually use the GPU until CUDA is installed.
Python/cuDNN
SSH into your container.
Run the following:
apt install python3 python3-dev python3-pip python3-pycuda
Check your CUDA version:
nvidia-smi
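As a final optional check, this one-liner uses the python3-pycuda package installed above to initialize CUDA inside the container and print the GPU name:
python3 -c "import pycuda.driver as cuda; cuda.init(); print(cuda.Device(0).name())"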