Improve nvidia support usage notes
jocado committed Jun 10, 2024
1 parent 21f6631 commit ce39721
Showing 1 changed file (README.md) with 22 additions and 2 deletions.
@@ -62,10 +62,30 @@ If the system is found to have an nvidia graphics card available, and the host h

To enable proper use of the GPU within docker, the nvidia runtime must be used. By default, the nvidia runtime is configured to use [CDI](https://github.com/cncf-tags/container-device-interface) mode, and the appropriate nvidia CDI config is created automatically for the system. You just need to specify the nvidia runtime when running a container.

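To verify that a CDI spec was generated, you can check the conventional CDI spec locations, or use the NVIDIA container toolkit CLI if it is available on your system (the paths and tooling below are general CDI conventions, not specifics of this snap):

```shell
# CDI specs are conventionally written to /etc/cdi or /var/run/cdi
ls /etc/cdi /var/run/cdi 2>/dev/null

# If nvidia-ctk is available, list the devices described by the spec
nvidia-ctk cdi list
```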
### Usage examples

Generic usage looks something like:

```shell
docker run --rm --runtime nvidia --gpus all {cuda-container-image-name}
```

or

```shell
docker run --rm --runtime nvidia {cuda-container-image-name}
docker run --rm --runtime nvidia --env NVIDIA_VISIBLE_DEVICES=all {cuda-container-image-name}
```

If your container image already has the appropriate environment variables set, you may be able to just specify the nvidia runtime, with no additional arguments required.

Please refer to this guide for more detail regarding the environment variables that can be used.
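As a sketch, the standard NVIDIA container toolkit variables can also be passed on the command line; for example, `NVIDIA_DRIVER_CAPABILITIES` restricts which driver libraries are exposed to the container (the image name below is a placeholder):

```shell
# Expose all GPUs, but only the compute and utility driver capabilities
docker run --rm --runtime nvidia \
  --env NVIDIA_VISIBLE_DEVICES=all \
  --env NVIDIA_DRIVER_CAPABILITIES=compute,utility \
  {cuda-container-image-name}
```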

*NOTE*: library paths and discovery are handled automatically, but binary paths are not, so if you wish to test using something like `nvidia-smi` you can either specify the full path or set the PATH environment variable.

e.g.

```shell
docker run --rm --runtime=nvidia --gpus all --env PATH="${PATH}:/var/lib/snapd/hostfs/usr/bin" ubuntu nvidia-smi
```
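Alternatively, invoking the binary by its full path should also work (assuming the same hostfs location as above):

```shell
docker run --rm --runtime=nvidia --gpus all ubuntu /var/lib/snapd/hostfs/usr/bin/nvidia-smi
```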

### Ubuntu Core 22
