diff --git a/README.md b/README.md
index 2d4ca49..86b9f69 100644
--- a/README.md
+++ b/README.md
@@ -62,10 +62,30 @@ If the system is found to have an nvidia graphics card available, and the host h
 To enable proper use of the GPU within docker, the nvidia runtime must be used. By default, the nvidia runtime will be configured to use ([CDI](https://github.com/cncf-tags/container-device-interface)) mode, and the appropriate nvidia CDI config will be automatically created for the system. You just need to specify the nvidia runtime when running a container.
 
-Example usage:
+### Usage examples
+
+Generic usage looks something like:
+
+```shell
+docker run --rm --runtime nvidia --gpus all {cuda-container-image-name}
+```
+
+or
 
 ```shell
-docker run --rm --runtime nvidia {cuda-container-image-name}
+docker run --rm --runtime nvidia --env NVIDIA_VISIBLE_DEVICES=all {cuda-container-image-name}
+```
+
+If your container image already has the appropriate environment variables set, you may be able to specify just the nvidia runtime, with no additional arguments required.
+
+Please refer to this guide for more detail regarding the environment variables that can be used.
+
+*NOTE*: library paths and discovery are handled automatically, but binary paths are not, so if you wish to test with something like `nvidia-smi` you can either specify the full path or set the PATH environment variable.
+
+e.g.
+
+```shell
+docker run --rm --runtime=nvidia --gpus all --env PATH="${PATH}:/var/lib/snapd/hostfs/usr/bin" ubuntu nvidia-smi
 ```
 
 ### Ubuntu Core 22
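
To illustrate the "appropriate environment variables" case mentioned in the added text, here is a minimal sketch. It assumes an image that already sets the NVIDIA variables itself (the official `nvidia/cuda` images typically set `NVIDIA_VISIBLE_DEVICES` and `NVIDIA_DRIVER_CAPABILITIES` in their Dockerfiles), so only the runtime needs to be selected; the tag shown is illustrative:

```shell
# Illustrative tag; any CUDA image that sets NVIDIA_VISIBLE_DEVICES /
# NVIDIA_DRIVER_CAPABILITIES itself should behave the same way.
docker run --rm --runtime nvidia nvidia/cuda:12.3.2-base-ubuntu22.04
```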
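
As a sketch of the kind of environment variables the referenced guide covers, selecting a single GPU and limiting the driver capabilities might look like the following. The values are illustrative; `NVIDIA_VISIBLE_DEVICES` and `NVIDIA_DRIVER_CAPABILITIES` are NVIDIA container toolkit variables, and `{cuda-container-image-name}` is the same placeholder used above:

```shell
# Expose only GPU index 0, and only the compute/utility driver
# capabilities, to the container.
docker run --rm --runtime nvidia \
  --env NVIDIA_VISIBLE_DEVICES=0 \
  --env NVIDIA_DRIVER_CAPABILITIES=compute,utility \
  {cuda-container-image-name}
```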