Improve nvidia support usage notes
jocado committed Jun 11, 2024
1 parent 21f6631 commit 9fbed5f
Showing 1 changed file with 27 additions and 7 deletions.
README.md: 27 additions & 7 deletions
@@ -60,13 +60,7 @@ Docker should function normally, with the following caveats:

If the system is found to have an nvidia graphics card available, and the host has the required nvidia libraries installed, the nvidia container toolkit will be setup and configured to enable use of the local GPU from docker. This can be used to enable use of CUDA from a docker container, for instance.
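
As a quick sanity check (not part of the snap's documentation, just standard Docker tooling), you can confirm that the nvidia runtime was registered with the daemon:

```shell
# "nvidia" should be listed alongside the default runc runtime
docker info | grep -i runtime
```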

To enable proper use of the GPU within docker, the nvidia runtime must be used. By default, the nvidia runtime will be configured to use [CDI](https://github.com/cncf-tags/container-device-interface) mode, and the appropriate nvidia CDI config will be automatically created for the system. You just need to specify the nvidia runtime when running a container.
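
If you want to inspect the generated CDI configuration, and assuming the NVIDIA Container Toolkit CLI (`nvidia-ctk`) is available on your host (its exact location may differ in the snap environment), the detected CDI devices can be listed:

```shell
# Lists the CDI device names, e.g. nvidia.com/gpu=0 and nvidia.com/gpu=all
nvidia-ctk cdi list
```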

### Ubuntu Core 22

@@ -106,6 +100,32 @@ Setting up the nvidia support should be automatic if the hardware is present, but you can disable it if required:

```shell
snap set docker nvidia-support.disabled=true
```
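
To check the current value, or to re-enable the support later, the standard snap configuration commands can be used with the same option name, for example:

```shell
# Show the current setting (may report an error if it was never set)
snap get docker nvidia-support.disabled

# Turn nvidia support back on
snap set docker nvidia-support.disabled=false
```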

### Usage examples

Generic usage would look something like this:

```shell
docker run --rm --runtime nvidia --gpus all {cuda-container-image-name}
```

or

```shell
docker run --rm --runtime nvidia --env NVIDIA_VISIBLE_DEVICES=all {cuda-container-image-name}
```

If your container image already has the appropriate environment variables set, you may be able to just specify the nvidia runtime, with no additional arguments required.
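
For instance, many CUDA base images already set `NVIDIA_VISIBLE_DEVICES` themselves; assuming that is the case for your image, something like this may be sufficient:

```shell
# No --gpus or --env flags needed when the image sets the variables itself
docker run --rm --runtime nvidia {cuda-container-image-name}
```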

Please refer to [this guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/docker-specialized.html) for more detail regarding the environment variables that can be used.
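
For illustration only (the values here are arbitrary examples), a specific GPU and a restricted set of driver capabilities could be requested like this:

```shell
# Expose only GPU 0, with just the compute and utility driver capabilities
docker run --rm --runtime nvidia \
  --env NVIDIA_VISIBLE_DEVICES=0 \
  --env NVIDIA_DRIVER_CAPABILITIES=compute,utility \
  {cuda-container-image-name}
```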

*NOTE*: library paths and discovery are handled automatically, but binary paths are not. If you wish to test using something like the `nvidia-smi` binary passed into the container from the host, you can either specify the full path or set the `PATH` environment variable.

e.g.

```shell
docker run --rm --runtime=nvidia --gpus all --env PATH="${PATH}:/var/lib/snapd/hostfs/usr/bin" ubuntu nvidia-smi
```
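
The full-path alternative mentioned in the note above would look like this (same image and host path, just without modifying `PATH`):

```shell
docker run --rm --runtime=nvidia --gpus all ubuntu /var/lib/snapd/hostfs/usr/bin/nvidia-smi
```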

## Development

Developing the `docker` snap package is typically performed on a "classic" Ubuntu distribution. The instructions here are written for Ubuntu 16.04 "Xenial".