Docker: Exposing GPU
Nov 27, 2014
A few months ago I needed to access the GPU on my Docker host machine from inside a container. There's not a ton of information on how to achieve this yet, so I thought I'd share how I made it work. Note that this isn't going to be a Docker tutorial, so a working knowledge of Docker is assumed.
First, some information on the host: I have a CUDA-capable PC at home running a GeForce GT 630, and I downloaded the 64-bit driver for Ubuntu 14.04 from the CUDA Downloads page. Docker has a build phase and a runtime phase, and in my research there's just no way to expose the device from a Dockerfile. However, modern versions support the --privileged flag, which gives extended privileges to the container and with it access to the GPU. But even once the GPU device is accessible from inside the container, the driver still needs to be installed there to make it work.
So here's the plan: we spawn a new container with --privileged, mount the driver installer we downloaded above into it, then install the driver inside the container.
docker run \
-it \
--privileged \
-v /home/marconi/cuda_6.5.14_linux_64.run:/root/cuda.run \
ubuntu:14.04
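Before installing anything, it's worth confirming that --privileged actually exposed the GPU: with the host driver loaded, the host's /dev/nvidia* device nodes should be visible inside the container. A small check along these lines (the helper function is my own sketch, not part of any tooling):

```shell
# Sketch: count the NVIDIA device nodes visible in a directory.
# With --privileged, /dev/nvidia0, /dev/nvidiactl, etc. should show up.
check_gpu_devices() {
    dev_dir=${1:-/dev}
    count=$(ls "$dev_dir"/nvidia* 2>/dev/null | wc -l | tr -d ' ')
    if [ "$count" -gt 0 ]; then
        echo "found $count NVIDIA device node(s)"
    else
        echo "no NVIDIA device nodes visible" >&2
        return 1
    fi
}
```

Run check_gpu_devices inside the container; if it reports nothing, either the host driver isn't loaded or the container wasn't started with --privileged.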
Next we run the installer:
sh /root/cuda.run \
-verbose \
-samples \
-samplespath=/root \
-silent \
-driver \
-toolkit
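In -silent mode the installer can fail without much fanfare (for example, if kernel headers are missing), so it pays to check its exit status. A hedged wrapper of my own, using the same flags as above:

```shell
# Sketch: run the CUDA .run installer and report success or failure.
# Flags mirror the manual invocation above.
run_cuda_installer() {
    installer=$1
    if sh "$installer" -silent -verbose -driver -toolkit \
         -samples -samplespath=/root; then
        echo "CUDA install finished"
    else
        echo "CUDA install failed; check the installer log" >&2
        return 1
    fi
}
```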
Next, CUDA expects some environment variables to be set:
export CUDA_HOME=/usr/local/cuda-6.5
echo "export CUDA_HOME=$CUDA_HOME" >> ~/.bashrc
echo "export LD_LIBRARY_PATH=$CUDA_HOME/lib64" >> ~/.bashrc
echo "export PATH=$CUDA_HOME/bin:\$PATH" >> ~/.bashrc
source ~/.bashrc
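With the variables exported, a quick sanity check confirms the toolkit landed where we expect it: nvcc under $CUDA_HOME/bin and the runtime libraries under $CUDA_HOME/lib64. This helper is a sketch assuming the default install prefix:

```shell
# Sketch: verify the CUDA toolkit layout under a given prefix.
cuda_env_ok() {
    prefix=${1:-$CUDA_HOME}
    # nvcc is the compiler driver; lib64 holds libcudart and friends.
    [ -x "$prefix/bin/nvcc" ] && [ -d "$prefix/lib64" ]
}
```

Once the toolkit is installed, cuda_env_ok && echo ok should print ok.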
Then we can test to see if it's actually working:
cd /root/NVIDIA_CUDA-6.5_Samples/1_Utilities/deviceQuery && \
make && ./deviceQuery
You should see some information about your GPU printed out like:
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GT 630"
CUDA Driver Version / Runtime Version 6.5 / 6.5
CUDA Capability Major/Minor version number: 3.5
...
We have successfully accessed the GPU inside the container!
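One caveat: the driver and toolkit only live in this container's filesystem, so it's worth committing the container to an image to avoid reinstalling every time. A sketch (the container and image names are placeholders I made up; DOCKER is parameterized so the command can be dry-run):

```shell
# Sketch: persist the provisioned container as a reusable image.
# Container and image names below are placeholders.
DOCKER=${DOCKER:-docker}

commit_cuda_image() {
    container=$1
    image=$2
    "$DOCKER" commit "$container" "$image"
}
```

For example, commit_cuda_image <container-id> cuda-ubuntu:14.04, then start new GPU containers from cuda-ubuntu:14.04 with --privileged. Keep in mind the driver version baked into the image must keep matching the host's kernel module.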