If your containers fail to start with errors like:
```
could not select device driver "nvidia" with capabilities: [[gpu]]
```

or

```
could not select device driver "nvidia" with capabilities: [[gpu compute video]]
```
this is usually a Docker daemon/runtime issue and not, contrary to what I initially assumed, a problem with the container you are trying to run (in my case, Immich). On my machine, two things were wrong:
- Docker was running on a daemon/context without NVIDIA runtime support.
- The user running Docker did not have permission to access the native Docker socket (`/var/run/docker.sock`) and was not in the `docker` group.
Even with the NVIDIA drivers and `nvidia-ctk` installed, Docker GPU requests fail until both are fixed.
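For reference, once the NVIDIA runtime is registered with Docker (the `nvidia-ctk runtime configure` step below), `/etc/docker/daemon.json` should contain an entry roughly like this sketch; the exact fields and path written vary by toolkit version and distribution:

```json
{
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  }
}
```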
- Add your user to the `docker` group:

  ```
  sudo usermod -aG docker $USER
  newgrp docker
  ```

- Switch to the native Docker Engine context:

  ```
  docker context use default
  ```

- Ensure the NVIDIA runtime is configured for Docker:

  ```
  sudo nvidia-ctk runtime configure --runtime=docker
  sudo systemctl restart docker
  ```

- Verify Docker can use the GPU before starting Immich (replace `<cuda-tag>` with a CUDA tag available for your setup, for example `12.4.1`):

  ```
  docker --context default run --rm --gpus all nvidia/cuda:<cuda-tag>-base-ubuntu22.04 nvidia-smi
  ```

  If this command works, GPU passthrough is fixed.
- Start:

  ```
  docker --context default compose up -d
  docker --context default compose ps
  ```
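As an aside, the `[[gpu compute video]]` capability set in the second error above comes from a compose-level GPU reservation rather than from `--gpus all`. Here is a minimal sketch of such a service definition; the service name and image tag are illustrative (Immich's CUDA hardware-acceleration preset requests something similar):

```yaml
services:
  machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:release-cuda
    deploy:
      resources:
        reservations:
          devices:
            # This device request is what Docker turns into a GPU
            # driver request with capabilities [gpu compute video].
            - driver: nvidia
              count: 1
              capabilities: [gpu, compute, video]
```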
If GPU requests still fail, work through these checks (a one-shot script version of them follows the list):

- Confirm you are on the right context:

  ```
  docker context ls
  ```

- Confirm Docker sees the runtimes:

  ```
  docker info | grep -i runtime -A2
  ```

- Confirm the host GPU works outside Docker:

  ```
  nvidia-smi
  ```
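To roll these checks (plus the group-membership one) into a single pass, here is a minimal shell sketch, assuming a POSIX-ish shell and that the commands above are on your `PATH`:

```sh
# One-shot sanity pass; each line mirrors a check above.
id -nG "$USER" | grep -qw docker || echo "user is NOT in the docker group"
docker context show                   # should print: default
docker info --format '{{json .Runtimes}}' | grep -q nvidia \
  || echo "nvidia runtime is NOT registered with Docker"
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
```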
- If you are on Docker Desktop for Linux, GPU support may not behave like the native Docker Engine. Use the native daemon (the `default` context) for NVIDIA workloads.
- Passing `--context default` every single time is not necessary once you have set the context with `docker context use default`.
- Tested on CachyOS, NVIDIA driver 590.48.01, CUDA 13.1, using nvidia-container-toolkit 1.18.2-1 from the AUR.