Setting Up NVIDIA CUDA Toolkit in a Docker Container on Debian/Ubuntu

Harnessing the power of NVIDIA GPUs on Debian and Ubuntu systems often requires navigating a maze of configurations and dependencies. NVIDIA’s CUDA Toolkit, essential for GPU-accelerated tasks, can simplify this with Docker. By containerizing the toolkit, developers ensure a consistent, streamlined, and optimized environment across systems. In this guide, we’ll detail the steps to seamlessly integrate the CUDA Toolkit within a Docker container for these popular Linux distributions.

In this tutorial you will learn:

  • How to set up Docker on Debian and Ubuntu for GPU compatibility.
  • The essentials of NVIDIA’s CUDA Toolkit and its importance for GPU-accelerated tasks.
  • Steps to integrate the CUDA Toolkit into a Docker container seamlessly.
  • Best practices for maintaining and updating your CUDA-enabled Docker environment.
  • Troubleshooting common issues and ensuring optimal GPU performance.


Software Requirements and Linux Command Line Conventions

Category    Requirements, Conventions or Software Version Used
System      Debian/Ubuntu Linux
Software    Installed NVIDIA drivers
Other       Privileged access to your Linux system as root or via the sudo command.
Conventions # – requires given Linux commands to be executed with root privileges, either directly as the root user or by use of the sudo command
            $ – requires given Linux commands to be executed as a regular non-privileged user

Preparing for GPU-Accelerated Docker Environments on Debian and Ubuntu

With the NVIDIA drivers already in place on your Debian or Ubuntu system, you’re halfway through the foundational setup for GPU-accelerated tasks. In this guide, we will leverage this foundation to seamlessly integrate NVIDIA’s CUDA Toolkit within a Docker container. This containerized environment will provide you with a consistent, streamlined platform for all your GPU computations, ensuring you can focus on the task at hand without wrestling with configurations. Let’s dive in and get that powerful GPU truly working for you.

  1. Installation of NVIDIA Drivers
    Before diving into the integration of the CUDA Toolkit with Docker, it’s imperative to have the NVIDIA drivers correctly installed on your system. These drivers act as the bridge between your operating system and the NVIDIA GPU hardware, ensuring optimal communication and performance. If you’re using Debian or Ubuntu and haven’t installed the NVIDIA drivers yet, or if you’re unsure about your current installation, we’ve got you covered. Debian users can follow our comprehensive Debian NVIDIA Installation Guide, while Ubuntu users can refer to the detailed Ubuntu NVIDIA Installation How-To. Ensure you’ve successfully installed and verified the drivers before proceeding to the next steps. This foundation is crucial for the seamless operation of GPU-accelerated tasks in Docker.
  2. Installation of Docker
    To containerize the NVIDIA CUDA Toolkit, Docker is our platform of choice. Debian and Ubuntu conveniently provide Docker in their repositories, making the installation process straightforward. By installing Docker, you’re equipping your system with the capability to create, deploy, and run applications in containers, ensuring consistent environments. Here are the commands to install Docker. First, update the APT package index:

    # apt update

    Install the Docker package from the distribution repositories

    # apt install docker.io

    Verify Docker’s status and version

    # systemctl status docker
    # docker --version

    Upon running the last command, you should see a version close to 20.10.24 or newer. With Docker now on board, we’re all set to move forward with our GPU-accelerated Docker environment setup.

  3. Installation of the NVIDIA Container Toolkit
    To run GPU-accelerated Docker containers, we’ll need the NVIDIA Container Toolkit. This toolkit extends Docker to fully leverage NVIDIA GPUs, ensuring that GPU capabilities can be used within containers without any hitches. By following the commands below, we’ll set up our Debian or Ubuntu system to recognize and properly interact with NVIDIA GPUs from within Docker containers. Execute the following commands in sequence. First, download the NVIDIA GPG key:

    $ curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey -o /tmp/nvidia-gpgkey

    Dearmor the GPG key and save it

    # gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg /tmp/nvidia-gpgkey

    Download the NVIDIA container toolkit list file

    $ curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list -o /tmp/nvidia-list

    Modify the list file to include the signature

    # sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' /tmp/nvidia-list > /etc/apt/sources.list.d/nvidia-container-toolkit.list

    Update the package database

    # apt-get update

    After executing these commands, you’ve set the stage for the NVIDIA Container Toolkit, which will be vital in our next steps to fully integrate the CUDA Toolkit within a Docker container.
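    To see exactly what the sed substitution above does, it can be tried on a sample repository line (the sample line below is illustrative of the list file format; actual entries in the downloaded file may differ):

```shell
# Illustrative sample line in the format of the downloaded list file
echo 'deb https://nvidia.github.io/libnvidia-container/stable/deb/$(ARCH) /' |
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g'
# The output line now carries the signed-by option, so APT will verify
# packages from this repository against the dearmored NVIDIA keyring.
```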

  4. Configuring Docker for NVIDIA Support
    Having the NVIDIA Container Toolkit in place, the next essential task is configuring Docker to recognize and utilize NVIDIA GPUs. Using the nvidia-ctk command shipped with the toolkit, you’ll modify Docker’s configuration to register NVIDIA’s runtime:

    # nvidia-ctk runtime configure --runtime=docker

    Behind the scenes, this command makes alterations to the /etc/docker/daemon.json file. As a result, Docker becomes aware of the NVIDIA runtime and can access GPU features. After modifying the Docker configuration, it’s vital to restart the Docker daemon for changes to take effect:

    # systemctl restart docker
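    For reference, after the configuration command has run, /etc/docker/daemon.json will typically contain an entry along these lines (the exact contents may vary with your toolkit version and any pre-existing settings):

```json
{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
```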
  5. Running the NVIDIA CUDA Docker Image
    With all the required setup in place, the exciting part begins: running a Docker container with NVIDIA GPU support. NVIDIA maintains a series of CUDA images on Docker Hub. For this tutorial, we’ll be using the 12.2.0-base-ubuntu22.04 tag. However, always check NVIDIA CUDA Docker Hub for the latest tags to stay up to date.
    Pull the specific NVIDIA CUDA image:

    # docker pull nvidia/cuda:12.2.0-base-ubuntu22.04

    Run the Docker container with GPU support:

    # docker run --gpus all -it nvidia/cuda:12.2.0-base-ubuntu22.04 bash

    This command runs the Docker container with full GPU access (--gpus all) and provides an interactive shell inside the container. Once inside, you can use NVIDIA utilities like nvidia-smi to confirm GPU access.
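    For a quick one-shot check that skips the interactive shell, nvidia-smi can be run directly in a disposable container (this assumes the drivers, toolkit configuration, and image pull from the previous steps are in place):

```shell
# Run nvidia-smi in a throwaway container; --rm removes it on exit
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

    If the familiar table listing your GPU, driver version, and CUDA version appears, the container has working GPU access.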

    Running the NVIDIA CUDA Docker Image

Maintaining Your CUDA-enabled Docker Environment

In a rapidly evolving tech landscape, it’s crucial to keep your CUDA-enabled Docker environment up-to-date. Regularly check the NVIDIA CUDA Docker Hub for image updates. Stay informed about NVIDIA and Docker releases, manage GPU resources judiciously when running multiple containers, backup essential configurations, and actively engage with the community for tips and best practices. Staying proactive will ensure optimal performance and security for your GPU-accelerated projects.
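Refreshing the image largely comes down to re-pulling the tag and cleaning up what the refresh leaves behind; a minimal routine, assuming the 12.2.0-base-ubuntu22.04 tag used in this tutorial, might look like:

```shell
# Re-pull the tag to pick up any updates published under it
docker pull nvidia/cuda:12.2.0-base-ubuntu22.04
# Remove dangling image layers left behind by the update
docker image prune -f
```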

Troubleshooting and Ensuring GPU Performance

Encountering hiccups is inevitable in any technical endeavor, and a CUDA-enabled Docker environment is no exception. When faced with issues, revisit your setup steps to catch any oversights. Use tools like nvidia-smi inside your container to verify GPU accessibility. Ensure your NVIDIA drivers are compatible with the CUDA version you’re deploying. Lastly, monitor GPU usage to prevent bottlenecks. Being attentive to these areas will pave the way for a smoother experience and optimal GPU performance.
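When a container cannot see the GPU, a common culprit is a missing NVIDIA runtime entry in Docker’s configuration. The check below is a minimal sketch; the DAEMON_JSON override exists purely for illustration and testing, while the real file lives at /etc/docker/daemon.json:

```shell
# Check that Docker's config registers the NVIDIA runtime.
# DAEMON_JSON may be overridden for testing; defaults to the real path.
DAEMON_JSON="${DAEMON_JSON:-/etc/docker/daemon.json}"
if grep -q '"nvidia"' "$DAEMON_JSON" 2>/dev/null; then
    echo "NVIDIA runtime configured"
else
    echo "NVIDIA runtime missing"
fi
```

If the runtime is reported missing, re-run the configuration step and restart the Docker daemon before retrying the container.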


Embracing the powerful synergy between NVIDIA’s CUDA platform and Docker offers a robust and flexible environment for GPU-accelerated applications. While the setup process may seem intricate initially, the benefits in scalability, reproducibility, and performance are unmatched. By diligently following the outlined steps and adhering to best practices, you’ll be well-prepared to harness the full potential of your GPU. As with any tech journey, continuous learning and engagement with the community will serve as valuable assets, ensuring that you remain at the forefront of GPU computing advancements.