How to Install an Older NVIDIA Driver in Proxmox 9

I brought my very old PC back to life with Proxmox. The GPU is an NVIDIA GeForce GTX 660, and IOMMU is not available, so GPU passthrough is off the table and LXC containers are the only option. That means the NVIDIA driver has to be installed on the Proxmox VE (pve) host itself. In my case, I needed driver version 470, which is now deprecated. After spending quite some time troubleshooting, I identified multiple issues, as well as the easiest way to fix them, which I want to share here.
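Before going down this road, it is worth double-checking that IOMMU really is unavailable on your machine. A minimal sketch, assuming the sysfs interface that modern kernels expose:

```shell
# If IOMMU (Intel VT-d / AMD-Vi) is active, the kernel populates
# /sys/class/iommu with one entry per IOMMU unit; an empty or
# missing directory means no GPU passthrough.
if [ -d /sys/class/iommu ] && [ -n "$(ls -A /sys/class/iommu 2>/dev/null)" ]; then
  IOMMU_STATE="IOMMU enabled"
else
  IOMMU_STATE="IOMMU not available"
fi
echo "$IOMMU_STATE"
```

If this reports IOMMU enabled, you may not need the host-driver approach at all.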

Prerequisites

Download the Correct Proxmox Kernel for the NVIDIA Driver

Proxmox 9 ships with the latest kernels by default. In my case, it was 6.14.8-2-pve, which the older NVIDIA driver doesn't build against. Therefore, we need to downgrade. I used 5.11.22-7-pve, since 5.10.6-1-pve caused build errors due to a kernel bug.

First, make older Proxmox kernel versions available by adding the following repositories to /etc/apt/sources.list:

deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
deb http://download.proxmox.com/debian/pve bullseye pvetest

Download the release key to /etc/apt/trusted.gpg.d/:

wget -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg

Now update your package list and search for available kernels:

apt update
apt search pve-kernel

Example output:

pve-firmware/stable,now 3.16-3 all [installed]
pve-kernel-5.10.6-1-pve/stable,stable,now 5.10.6-1 amd64
...
pve-kernel-5.11.22-7-pve/stable,stable 5.11.22-7 amd64
...

Choose the correct kernel version and install it:

apt install pve-kernel-5.11.22-7-pve

After installation, a reboot is required, but first pin the newly installed kernel so it's the one selected at boot:

proxmox-boot-tool kernel add 5.11.22-7-pve
proxmox-boot-tool kernel pin 5.11.22-7-pve
proxmox-boot-tool refresh

Now reboot your host and confirm you’re running the correct kernel:

uname -r

For the GPU driver to compile, you'll also need matching headers:

apt install pve-headers-$(uname -r)
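You can confirm the headers landed where the NVIDIA installer will look for them. A small check, assuming the usual /usr/src layout that the pve-headers package uses:

```shell
# The NVIDIA installer looks for kernel headers under /usr/src;
# this just checks that the directory for the running kernel exists.
HDR_DIR="/usr/src/linux-headers-$(uname -r)"
if [ -d "$HDR_DIR" ]; then
  HDR_MSG="headers found: $HDR_DIR"
else
  HDR_MSG="headers missing, run: apt install pve-headers-$(uname -r)"
fi
echo "$HDR_MSG"
```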

Install Your NVIDIA GPU Driver

If you haven't already, download the latest supported NVIDIA driver for your GPU (not necessarily the latest driver, but the newest one that supports your hardware):

wget https://us.download.nvidia.com/XFree86/Linux-x86_64/470.256.02/NVIDIA-Linux-x86_64-470.256.02.run
chmod +x NVIDIA-Linux-x86_64-470.256.02.run

You need the correct version of gcc to build the kernel module. The installer will warn you if the GCC version doesn't match the one used to build your kernel. For my kernel, GCC 10 was required:

apt install gcc-10

To switch between installed GCC versions (for example, between version 14 and 10), use:

update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-14 20
update-alternatives --install /usr/bin/cc cc /usr/bin/gcc 30
update-alternatives --set cc /usr/bin/gcc
update-alternatives --config gcc

You will see a menu like:

There are 2 choices for the alternative gcc (providing /usr/bin/gcc).
  Selection    Path             Priority   Status
------------------------------------------------------------
* 0            /usr/bin/gcc-14   20        auto mode
  1            /usr/bin/gcc-10   10        manual mode
  2            /usr/bin/gcc-14   20        manual mode
Press <enter> to keep the current choice[*], or type selection number: 

Enter 1 and press Enter to select GCC 10. Confirm with:

gcc --version
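To see the mismatch the installer warns about for yourself, you can compare the compiler recorded in the running kernel against the default gcc. A sketch, relying on the compiler string the kernel embeds in /proc/version:

```shell
# The kernel records the compiler that built it in /proc/version;
# its gcc major version should match the default gcc on PATH before
# you build the NVIDIA kernel module.
KERNEL_CC=$(grep -o 'gcc[^,)]*' /proc/version | head -n 1 || true)
DEFAULT_CC=$( (command -v gcc >/dev/null 2>&1 && gcc -dumpversion) || echo "none" )
echo "kernel built with: ${KERNEL_CC:-unknown}"
echo "default gcc:       $DEFAULT_CC"
```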

Now run the installer, specifying the kernel header source:

./NVIDIA-Linux-x86_64-470.256.02.run --kernel-source-path /usr/src/linux-headers-5.11.22-7-pve

It’s important to provide the correct --kernel-source-path, matching your currently running kernel.

Follow the prompts in the installer. Two notes:

1. Do not select DKMS unless you plan to upgrade the kernel later. (You can try DKMS if you'd like, but I haven't tested it.)
2. If asked:

Would you like to run the nvidia-xconfig utility to automatically update your X configuration file so that the NVIDIA X driver will be used when you restart X? Any pre-existing X configuration file will be backed up.

Simply select No.

After the installation completes, verify with:

nvidia-smi

Example output:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.256.02   Driver Version: 470.256.02   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 N/A |                  N/A |
| 36%   47C    P0    N/A /  N/A |      0MiB /  1998MiB |     N/A      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
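For a scriptable check, nvidia-smi's query mode is handy. A sketch, guarded so it degrades gracefully on machines where the driver isn't installed:

```shell
# Query GPU name and driver version in CSV form; fall back to a
# message when nvidia-smi is absent or no GPU is visible.
if command -v nvidia-smi >/dev/null 2>&1; then
  GPU_INFO=$(nvidia-smi --query-gpu=name,driver_version --format=csv,noheader 2>/dev/null || echo "no GPU visible")
else
  GPU_INFO="nvidia-smi not on PATH"
fi
echo "$GPU_INFO"
```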

Congratulations, your NVIDIA GPU now works on your Proxmox setup!

Add GPU capabilities to an LXC Container

The most difficult part is now behind us. Next, we need to bind the GPU to a new LXC container. First, create a new container: open your Proxmox VE interface, click the Create CT button at the top right of your screen, and follow the instructions to complete the process.

Next, enter the shell of your host and check for the GPU device files:

ls -al /dev/nvidia*

Example output:

crw-rw-rw- 1 root root 195,   0 Aug 16 12:36 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Aug 16 12:36 /dev/nvidiactl
/dev/nvidia-caps:
total 0
drwxr-xr-x  2 root root     80 Aug 16 12:36 .
drwxr-xr-x 22 root root   4440 Aug 16 14:27 ..
cr--------  1 root root 238, 1 Aug 16 12:36 nvidia-cap1
cr--r--r--  1 root root 238, 2 Aug 16 12:36 nvidia-cap2

Two entries are missing here: the uvm device nodes (/dev/nvidia-uvm and /dev/nvidia-uvm-tools).

To fix this, add the following lines to /etc/modules-load.d/modules.conf:

nvidia
nvidia_uvm

and to /etc/udev/rules.d/70-nvidia.rules:

KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L && /bin/chmod 666 /dev/nvidia*'"
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u && /bin/chmod 0666 /dev/nvidia-uvm*'"

Then run:

update-initramfs -u -k all
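If you'd rather not reboot right away, you can load the modules and create the uvm device nodes immediately. A sketch, guarded so it is a no-op on machines where the driver isn't installed:

```shell
# Load the NVIDIA modules now and let nvidia-modprobe create the
# /dev/nvidia-uvm* nodes (same flags as in the udev rule above);
# skipped entirely when the nvidia module isn't present.
if modinfo nvidia >/dev/null 2>&1; then
  modprobe nvidia nvidia_uvm
  nvidia-modprobe -c0 -u
  UVM_STATE=$(ls /dev/nvidia-uvm* 2>/dev/null || echo "uvm nodes missing")
else
  UVM_STATE="nvidia module not installed; reboot after installing the driver"
fi
echo "$UVM_STATE"
```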

Take note of the major device numbers shown by ls -al /dev/nvidia* (195 for the GPU devices and 507 for the uvm devices in my case; the uvm major can differ on your system). Next, edit your LXC container’s configuration file at /etc/pve/nodes/pve/lxc/<VMID>.conf, replacing <VMID> with your container’s ID (for example, 101.conf for container 101).

Append the following entries to the config file:

lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 507:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
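Since the uvm major number varies between systems, you can read the actual majors straight from the device nodes instead of copying mine. A sketch; dev_major is just an illustrative helper, not part of any tool:

```shell
# stat's %t format prints a character device's major number in hex;
# convert it to decimal for the lxc.cgroup2.devices.allow lines.
dev_major() { printf '%d\n' "0x$(stat -c '%t' "$1")"; }

for d in /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm /dev/nvidia-uvm-tools; do
  if [ -e "$d" ]; then
    echo "$d major=$(dev_major "$d")"
  fi
done
```

Use the printed majors in the two lxc.cgroup2.devices.allow lines of your container config.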

In the LXC container, we need to run the same NVIDIA installer again, but this time without building kernel modules. Restart the container so the new configuration takes effect, then copy the installer into it and run it with the --no-kernel-module parameter:

pct push 101 ./NVIDIA-Linux-x86_64-470.256.02.run /root/NVIDIA-Linux-x86_64-470.256.02.run
lxc-attach --name 101
chmod +x /root/NVIDIA-Linux-x86_64-470.256.02.run
/root/NVIDIA-Linux-x86_64-470.256.02.run --no-kernel-module

After the installation finishes, check with nvidia-smi inside the container. If everything is correct, you’re done!
