To use the NVIDIA runtime with containerd, additional configuration is required: nvidia must be registered as a runtime and systemd set as the cgroup driver.
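A representative fragment of /etc/containerd/config.toml is sketched below. The exact plugin section names vary across containerd versions, so treat this as an illustration of the shape of the change rather than a drop-in patch:

# /etc/containerd/config.toml (fragment; section paths depend on containerd version)
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "nvidia"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
      # Path assumes the NVIDIA Container Toolkit's default install location
      BinaryName = "/usr/bin/nvidia-container-runtime"
      SystemdCgroup = true

After saving the change, restart containerd (sudo systemctl restart containerd) and launch a test container: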
sudo ctr run --rm --gpus 0 -t docker.io/nvidia/cuda:11.0-base cuda-11.0-base nvidia-smi
If the runtime is configured correctly, the container prints the usual nvidia-smi summary of the host's GPUs.
TensorFlow Docker
Download a TensorFlow Docker image
The official TensorFlow Docker images are located in the tensorflow/tensorflow Docker Hub repository. Image releases are tagged using the following format:
Tag         Description
latest      The latest release of the TensorFlow CPU binary image. Default.
nightly     Nightly builds of the TensorFlow image. (Unstable.)
version     A specific version of the TensorFlow binary image, for example: 2.1.0
devel       Nightly builds of a TensorFlow master development environment. Includes TensorFlow source code.
custom-op   Special experimental image for developing TF custom ops. More info here.
Each base tag has variants that add or change functionality:
Tag Variants   Description
tag-gpu        The specified tag release with GPU support. (See below.)
tag-jupyter    The specified tag release with Jupyter (includes TensorFlow tutorial notebooks).
You can use multiple variants at once. For example, the following downloads TensorFlow release images to your machine:
docker pull tensorflow/tensorflow # latest stable release
docker pull tensorflow/tensorflow:devel-gpu # nightly dev release w/ GPU support
docker pull tensorflow/tensorflow:latest-gpu-jupyter # latest release w/ GPU support and Jupyter
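After pulling, you can confirm which variants are available locally with a standard image listing (nothing TensorFlow-specific here):

docker images tensorflow/tensorflow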
Start a TensorFlow Docker container
To start a TensorFlow-configured container, use the following command form:
docker run [-it] [--rm] [-p hostPort:containerPort] tensorflow/tensorflow[:tag] [command]
Examples using CPU-only images
docker run -it --rm tensorflow/tensorflow \
python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
Let’s demonstrate some more TensorFlow Docker recipes. Start a bash shell session within a TensorFlow-configured container:
docker run -it tensorflow/tensorflow bash
Within the container, you can start a python session and import TensorFlow.
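For instance, you can run a quick sanity check from the container's shell (tf.__version__ is TensorFlow's standard version attribute):

python -c "import tensorflow as tf; print(tf.__version__)"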
To run a TensorFlow program developed on the host machine within a container, mount the host directory and change the container’s working directory (-v hostDir:containerDir -w workDir):
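For example, assuming a script.py in the current working directory on the host:

docker run -it --rm -v $PWD:/tmp -w /tmp tensorflow/tensorflow python ./script.py

Here /tmp inside the container is an arbitrary mount point; any writable container path works.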
Docker is the easiest way to run TensorFlow on a GPU since the host machine only requires the NVIDIA® driver (the NVIDIA® CUDA® Toolkit is not required).
Install the NVIDIA Container Toolkit to add NVIDIA® GPU support to Docker. nvidia-container-runtime is only available for Linux. See the nvidia-container-runtime platform support FAQ for details.
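As a sketch, on an apt-based distribution with NVIDIA's package repository already configured, installation comes down to the following (package name taken from the toolkit's standard instructions; restarting Docker picks up the new runtime):

sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker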
Check if a GPU is available:
lspci | grep -i nvidia
Verify your nvidia-docker installation:
docker run --gpus all --rm nvidia/cuda nvidia-smi
Note: nvidia-docker v2 uses --runtime=nvidia instead of --gpus all. nvidia-docker v1 uses the nvidia-docker alias, rather than the --runtime=nvidia or --gpus all command line flags.
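For reference, the equivalent check under nvidia-docker v2 would look like this (assuming the nvidia runtime is registered with the Docker daemon):

docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi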
Examples using GPU-enabled images
Download and run a GPU-enabled TensorFlow image (may take a few minutes):
docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu \
python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
Use the latest TensorFlow GPU image to start a bash shell session in the container:
docker run --gpus all -it tensorflow/tensorflow:latest-gpu bash
If you still need to install Docker itself, use the following command to set up the stable repository. To add the nightly or test repository, add the word nightly or test (or both) after the word stable in the commands below. Learn about nightly and test channels.
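A sketch for Ubuntu on amd64, mirroring Docker's standard repository setup (substitute your distribution and architecture as needed; see Docker's installation guide for other platforms):

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"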