This guide was tested on Linux Mint 20.
-
Install docker.
-
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
-
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
-
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(. /etc/os-release; echo "$UBUNTU_CODENAME") stable"
-
sudo apt-get update
-
sudo apt install docker-ce docker-compose
-
sudo usermod -aG docker $USER
-
and test with
docker --version
-
-
I'm going to use the GPU, so it needs to be exposed to Docker with the NVIDIA Container Toolkit (nvidia-docker2). You will also need working NVIDIA drivers on your machine.
distribution=ubuntu20.04   # the base Ubuntu release for Mint 20
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
- Restart the Docker daemon to complete the installation after setting the default runtime:
sudo systemctl restart docker
- Test it by running nvidia-smi inside the container
sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
This fails for me because my driver can't run CUDA 11, so I replaced 11.0
with 10.1
and bingo...
-
There are a number of PyTorch Docker images here https://hub.docker.com/r/pytorch/pytorch I'm going to test the latest build with
docker run --gpus all -it --rm --ipc=host -v /localdir/:/containerdir/ --name mypytorchproject pytorch/pytorch:latest
(In practice I wouldn't use latest; use a numbered version, since latest might break your code when it gets updated.) -
--gpus all
use all the GPUs -
--rm
cleans up the container after running -
--ipc=host
use the host's interprocess comms. -
-it
run in interactive mode, i.e. get a shell -
Once the shell is running, test that the GPU is working with
python -c "import torch; device = torch.device('cuda' if torch.cuda.is_available() else 'cpu'); print('Using device:', device); torch.rand(10).to(device)"
which should print cuda
if all is good. -
You can fetch pre-made Docker images with
docker pull
for example
docker pull jjanzic/docker-python3-opencv
-
To use this in JetBrains PyCharm follow the instructions at https://www.jetbrains.com/help/pycharm/using-docker-as-a-remote-interpreter.html The example uses the image
python:latest
but change this to your PyTorch image to run PyTorch! -
To get the GPU working edit the configuration, and add
--gpus all
to the "Docker container settings" box. You can also add the
--ipc=host
flag here
-
We now want to process our requirements file, so I'm going to make my own Docker image to do this.
-
I first created a file called
Dockerfile
in the same directory. -
This Dockerfile processes the requirements.txt:
FROM pytorch/pytorch:1.6.0-cuda10.1-cudnn7-runtime
RUN apt-get update
RUN apt-get install -y ffmpeg libsm6 libxext6
COPY requirements.txt /
RUN pip install -r /requirements.txt
# COPY main.py /workspace/.
this starts with the PyTorch container, installs some additional libraries and then processes the requirements.txt file. This file is:
opencv-contrib-python
numpy
- This can be built from the command line or from PyCharm. In PyCharm, right click on the two green triangles that appear when you open the Dockerfile and select Edit Dockerfile. Give the image a name in the image tag field; I used pytest. Then click Run, which should build the Docker image. If you edit requirements.txt you will need to run it again. You could copy your code into the image as well, but for now this slows down editing since you would always need to rebuild the image.
- Create a subdirectory called work and add main.py to it:
import torch
import cv2
import numpy as np
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
torch.rand(10).to(device)
a=np.random.randint(0,255,(300,300,3),dtype=np.uint8)
cv2.imshow('test',a)
cv2.waitKey()
cv2.destroyAllWindows()
This file tests that PyTorch is running on the GPU and that OpenCV can display an image. We can start the Docker image from the command line to test it:
docker run -it --gpus all --ipc=host --rm --mount type=bind,source="$PWD"/work,target=/work --volume="$HOME/.Xauthority:/root/.Xauthority:rw" --env="DISPLAY" --network=host pytest:latest
cd /work
python main.py
- To run this from PyCharm, in Settings / Project / Python Interpreter add a new interpreter and select Docker. You might need to create a new Docker server with the New button. In the image name you should have pytest:latest as an option; select it.
- Edit the run configuration. Make sure you are using the Docker-based Python interpreter. Edit the Docker container settings.
- Add
/root/.Xauthority
and
/home/[your username]/.Xauthority
in the volume bindings - Add
DISPLAY
and
:0
in the environment variables - In run options add
--gpus all --ipc=host --network=host
- There is no need for the mount since PyCharm does that for us.
It should now print out
cuda
and display a random patterned image. - It should be possible to run this on a remote machine https://youtrack.jetbrains.com/issue/PY-33489
On my Linux Mint 19.2, Docker installation step 3 failed saying "Malformed input, repository not added."
I added the apt repository manually in the "additional-repositories.list" file (as described at https://forums.linuxmint.com/viewtopic.php?t=300469):
Then verify that "...download.docker.com/ubuntu bionic..." appears after updating.