
Setting up a Docker Swarm with GPUs

Installing Docker

Official instructions.

Add yourself to the docker group to be able to run containers as non-root (see Post-install steps for Linux).

sudo groupadd docker
sudo usermod -aG docker $USER
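
Group membership is applied on your next login; to pick it up in the current shell without logging out, you can (optionally) start a subshell with the new group:

newgrp docker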

Verify with docker run hello-world.

Installing the NVidia Container Runtime

Official instructions.

Start by installing the appropriate NVidia drivers. Then continue to install NVidia Docker.
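
The exact install steps depend on your distribution and have changed over time (newer releases ship nvidia-container-toolkit instead of nvidia-docker2), so follow the official instructions; as a rough sketch for Ubuntu with the apt-based repository:

distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker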

Verify with docker run --gpus all,capabilities=utility nvidia/cuda:10.0-base nvidia-smi.

Configuring Docker to work with your GPU(s)

The first step is to identify the GPU(s) available on your system. Docker will expose these as 'resources' to the swarm. This allows other nodes to place services (swarm-managed container deployments) on your machine.

These steps are currently for NVidia GPUs.

Docker identifies your GPU by its Universally Unique IDentifier (UUID). Find the GPU UUID for the GPU(s) in your machine.

nvidia-smi -a

A typical UUID looks like GPU-45cbf7b3-f919-7228-7a26-b06628ebefa1. Now, only take the first two dash-separated parts, e.g.: GPU-45cbf7b3.
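
To list only the UUID(s), nvidia-smi's query flags can be used; the cut is just a convenience to trim the output to the short form described above:

nvidia-smi --query-gpu=uuid --format=csv,noheader
nvidia-smi --query-gpu=uuid --format=csv,noheader | cut -d- -f1,2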

Open up the Docker engine configuration file, typically at /etc/docker/daemon.json.

Add the GPU ID to the node-generic-resources list. Make sure that the nvidia runtime is present and set default-runtime to it. Keep any other configuration options that are already in the file. Take care with the JSON syntax, which does not forgive single quotes or trailing commas.

{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia",
  "node-generic-resources": [
    "gpu=GPU-45cbf7b"
    ]
}
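
Because the Docker daemon will refuse to start on malformed JSON, it can help to validate the file before restarting (assuming jq or Python 3 is available):

jq . /etc/docker/daemon.json
python3 -m json.tool /etc/docker/daemon.json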

Now, make sure to enable GPU resource advertising by adding or uncommenting the following in /etc/nvidia-container-runtime/config.toml:

swarm-resource = "DOCKER_RESOURCE_GPU"
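
If your version of config.toml ships with this option commented out, you can uncomment it in place; otherwise add the line by hand (the sed below assumes the commented line exists in the file):

sudo sed -i 's/^#swarm-resource/swarm-resource/' /etc/nvidia-container-runtime/config.toml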

Restart the service.

sudo systemctl restart docker.service
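
To check that the new default runtime was picked up after the restart (this should print nvidia):

docker info --format '{{.DefaultRuntime}}'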

Initializing the Docker Swarm

Initialize a new swarm on a manager-to-be.

docker swarm init
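
On hosts with more than one network interface, Docker asks you to pick the address other nodes should use; in that case pass it explicitly:

docker swarm init --advertise-addr <ip-of-this-node>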

Add new worker nodes or additional manager nodes. Run the following command on a node that is already part of the swarm:

docker swarm join-token (worker|manager)

Then, run the resulting command on a member-to-be.
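
The printed command looks roughly like this (token elided); port 2377 is the swarm management port:

docker swarm join --token <token> <manager-ip>:2377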

Show who's in the swarm:

docker node ls
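
To confirm that a node actually advertises the GPU as a generic resource (the template below assumes the usual docker node inspect output layout):

docker node inspect self --format '{{json .Description.Resources.GenericResources}}'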

A first deployment

docker service create --replicas 1 \
  --name tensor-qs \
  --generic-resource "gpu=1" \
  tomlankhorst/tensorflow-quickstart

This deploys a TensorFlow quickstart image that follows the TensorFlow quickstart tutorial.

Show active services:

docker service ls

Inspect the service:

$ docker service inspect --pretty tensor-qs
ID:             vtjcl47xc630o6vndbup64c1i
Name:           tensor-qs
Service Mode:   Replicated
 Replicas:      1
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         tomlankhorst/tensorflow-quickstart:latest@sha256:1f793df87f00478d0c41ccc7e6177f9a214a5d3508009995447f3f25b45496fb
 Init:          false
Resources:
Endpoint Mode:  vip

Show the logs:

$ docker service logs tensor-qs
...
tensor-qs.1.3f9jl1emwe9l@tlws    | 2020-03-16 08:45:15.495159: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
tensor-qs.1.3f9jl1emwe9l@tlws    | 2020-03-16 08:45:15.621767: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
tensor-qs.1.3f9jl1emwe9l@tlws    | Epoch 1, Loss: 0.132665216923, Accuracy: 95.9766693115, Test Loss: 0.0573637597263, Test Accuracy: 98.1399993896
tensor-qs.1.3f9jl1emwe9l@tlws    | Epoch 2, Loss: 0.0415383689106, Accuracy: 98.6949996948, Test Loss: 0.0489368513227, Test Accuracy: 98.3499984741
tensor-qs.1.3f9jl1emwe9l@tlws    | Epoch 3, Loss: 0.0211332384497, Accuracy: 99.3150024414, Test Loss: 0.0521399155259, Test Accuracy: 98.2900009155
tensor-qs.1.3f9jl1emwe9l@tlws    | Epoch 4, Loss: 0.0140329506248, Accuracy: 99.5716705322, Test Loss: 0.053688980639, Test Accuracy: 98.4700012207
tensor-qs.1.3f9jl1emwe9l@tlws    | Epoch 5, Loss: 0.00931495986879, Accuracy: 99.7116699219, Test Loss: 0.0681483447552, Test Accuracy: 98.1500015259
@mdailey

mdailey commented May 31, 2020

It seems that the swarm-resource option is now deprecated in the latest nvidia-container-runtime package. On Ubuntu 16.04 with nvidia-container-runtime package version 3.2.0-1 and nvidia-container-toolkit package version 1.1.1-1, uncommenting the swarm-resource line in /etc/nvidia-container-runtime/config.toml breaks my swarm services.

@RafaelWO

RafaelWO commented Aug 11, 2020

I got this solution to work with some changes:

  1. Change "gpu=GPU-45cbf7b" to "NVIDIA-GPU=GPU-45cbf7b" in the file /etc/docker/daemon.json
  2. Start the service with the arg --generic-resource "NVIDIA-GPU=0"

References:
https://docs.docker.com/engine/reference/commandline/dockerd/#miscellaneous-options
https://docs.docker.com/engine/reference/commandline/service_create/#create-services-requesting-generic-resources
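
For reference, the daemon.json fragment with that first change applied would look roughly like this (rest of the file as in the main instructions):

"node-generic-resources": [
    "NVIDIA-GPU=GPU-45cbf7b"
]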

@julienschuermans

julienschuermans commented Feb 22, 2021


Thanks @RafaelWO, this worked for me!

@maaft

maaft commented Mar 31, 2021

Hi! I have multiple GPUs on my server and added 2 out of 8 to node-generic-resources in /etc/docker/daemon.json.

When I deploy my image with: docker service create --replicas 2 --name swarm-test --generic-resource "NVIDIA-GPU=1" swarm-test both containers use the same GPU.

Furthermore, nvidia-smi still shows all 8 GPUs (although only 2 are present in daemon.json). Is this file somehow ignored?

Instead I want each replica to use a dedicated GPU. How can I achieve this?

@nanotower

Swarm with NVIDIA is a mess. Poor documentation, and even today there are no straightforward steps to make it work properly.

I have two instances in Google Compute Engine, both with an NVIDIA Tesla T4. Suddenly one doesn't work: CUDA in swarm is gone. The other one, with exactly the same config, is working. I checked the GPU UUIDs with nvidia-smi -a and both have changed. daemon.json has an older UUID in both VMs, but one is working and the other isn't. Can someone explain this?
I can understand that Google may change the hardware across start and stop cycles. But why is one working if the daemon has a different UUID?

@PaSteEsc


Hey @maaft, did you find any solution?

@rkasigi

rkasigi commented Sep 2, 2021

Thank You!

It worked on:
AWS g4dn.xlarge
Ubuntu 20.04.2 LTS
Docker 20.10.8

@rogerbramon

Thanks! Very useful. In my case I had to use the complete UUID, otherwise it was not able to identify the GPU.

@edwardnguyen1705

Hello,
Have you ever tried to create a service/node running on 2 GPUs, e.g. "NVIDIA-GPU=1,2"?

@coltonbh

Amazing documentation. Thank you! I ripped it off and added another approach I've used for GPU support on Swarm at the link below. Credit to this Gist for collecting the documentation and commentary :)

https://gist.github.com/coltonbh/374c415517dbeb4a6aa92f462b9eb287

@nie3e

nie3e commented Apr 15, 2023

Can I somehow mark one GPU so that it can be used multiple times?
This doesn't work:

    "node-generic-resources": [
      "NVIDIA-GPU=GPU-9c9e183c",
      "NVIDIA-GPU=GPU-9c9e183c",
      "NVIDIA-GPU=GPU-9c9e183c",
      "NVIDIA-GPU=GPU-9c9e183c"
     ]

Trying:

docker service create --replicas 2 --name tensor-qs --generic-resource "NVIDIA-GPU=1" tomlankhorst/tensorflow-quickstart

Gives:

overall progress: 1 out of 2 tasks
1/2: no suitable node (insufficient resources on 1 node)
2/2: running   [==================================================>]

Quick edit:
My GPU UUID is GPU-9c9e183c-e6f4-1ebd-d775-2cf59c99bb1b

and if i modify daemon.json to this:

"node-generic-resources": [
      "NVIDIA-GPU=GPU-9c9e183c",
      "NVIDIA-GPU=GPU-9c9e183c-e6f4-1ebd-d775-2cf59c99bb1b",
      "NVIDIA-GPU=GPU-9c9e183c-e",
     ]

it is fine

Edit: Nope, it's not fine

@rznas

rznas commented Feb 23, 2024

The above did not work for me.
Regarding swarm-resource = "DOCKER_RESOURCE_GPU" from the main discussion: the GPU part is the generic-resource name (in capitalized form), and the GPU UUID should be given in full (GPU-d8eaf8be-5e85-1a6d-6f9f-82fda3dbb7d1).

So in my case,

  • in /etc/docker/daemon.json:
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia",
    "node-generic-resources": [
        "NVIDIAGPU=GPU-d8eaf8be-5e85-1a6d-6f9f-82fda3dbb7d1"
    ]
}
  • in /etc/nvidia-container-runtime/config.toml:
swarm-resource = "DOCKER_RESOURCE_NVIDIAGPU"
  • in service definition: --generic-resource "NVIDIAGPU=1"

@lyze237

lyze237 commented Apr 26, 2024

Hey @nie3e, did you ever figure out how to share a GPU across multiple containers?

I've tried modifying it the way you mentioned:

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "node-generic-resources": [
            "GPU=GPU-53dea362-0606-18ae-bbc7-02e855807511",
            "GPU=GPU-53dea362"
    ]
}

but I either got the same error that it can't find a suitable node or the following one:

starting container failed: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #1: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: device error: GPU-53dea362: unknown device: unknown

Any ideas?

@nie3e

nie3e commented Apr 26, 2024

@lyze237 Hello. Unfortunately not :( If I need to share one GPU I am using docker compose for now, and list every service separately.

@coltonbh

coltonbh commented May 6, 2024

@lyze237 you can share GPUs across containers by not requesting them as resources, since a generic resource is allocated to only a single container. Instead, run the services without declaring resources: all GPUs are visible by default in all containers on a node. If you then want to limit which GPUs a container uses, set the same NVIDIA_VISIBLE_DEVICES device numbers as an environment variable for those containers (assuming you don't want the containers to use all the GPUs). This is Solution 1 that I wrote up here: https://gist.github.com/coltonbh/374c415517dbeb4a6aa92f462b9eb287
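
A minimal sketch of that approach, reusing the quickstart image from the gist (the device index 0 and service name are just examples; NVIDIA_VISIBLE_DEVICES is read by the NVIDIA runtime):

docker service create --replicas 2 \
  --name tensor-qs-shared \
  --env NVIDIA_VISIBLE_DEVICES=0 \
  tomlankhorst/tensorflow-quickstart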
