Setting up NVIDIA Docker with ROS Melodic and conda to use hardware acceleration

Steps

  1. Install Docker using the instructions here. These installation instructions were tested on Ubuntu 18.04 with Docker version 19.03.3, build a872fc2f86.

  2. To run Docker as a non-root user, follow the post-installation instructions here.

  3. Install the NVIDIA Container Toolkit: Reference

$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
$ sudo systemctl restart docker
  4. Install nvidia-docker2: [Reference](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0))
$ sudo apt-get install nvidia-docker2
$ sudo pkill -SIGHUP dockerd
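Installing nvidia-docker2 registers an `nvidia` runtime with the Docker daemon, which is why the daemon has to be reloaded afterwards. The sketch below shows roughly what that registration looks like; the real file is `/etc/docker/daemon.json`, but the content is held in a shell variable here so nothing on the system is touched.

```shell
# Approximate runtime entry that nvidia-docker2 adds to /etc/docker/daemon.json
# (shown as a string for inspection only; do not overwrite your real config blindly):
daemon_json='{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}'
echo "$daemon_json"
```

If the `nvidia` runtime is missing from this file, `docker run --runtime=nvidia` (used by `run_image.sh` below) will fail with an "unknown runtime" error.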
  5. Once Docker is installed, there may be a problem using the sound card from Gazebo inside the container. To fix the possible errors (see Error 1 below), add your user to the audio group:

$ sudo usermod -aG audio $USER

  6. Clone this repository, cd into the directory, and run $ ./install.sh

  7. The above instruction will automatically log you into the container.

  8. Once you exit, a new instance can be started by typing $ rosdocker in the terminal. To enter an already running container, run $ newdockterm.
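The `rosdocker` alias and `newdockterm` function are added to `~/.bashrc` by `install.sh`. Their exact bodies are not shown in this gist, so the following is only a hypothetical sketch of what they might look like; the path and the `docker exec` invocation are assumptions.

```shell
# Hypothetical ~/.bashrc additions; bodies are illustrative assumptions,
# only the names rosdocker/newdockterm come from the instructions above.
alias rosdocker='~/ros_docker/run_image.sh dev:melodic'   # start a fresh container

function newdockterm() {
    # attach a new shell to the most recently started container
    docker exec -it "$(docker ps -q -n 1)" bash
}
```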

Possible Errors

  1. AL lib: (WW) alc_initconfig: Failed to initialize backend "pulse"
    ALSA lib confmisc.c:768:(parse_card) cannot find card '0'
    

This error occurs when opening Gazebo and is caused by the container not being able to access the sound card. It can be fixed by running $ sudo usermod -aG audio $USER in the terminal before running the script. Reference

  2. libGL error: No matching fbConfigs or visuals found
    libGL error: failed to load driver: swrast
    X Error of failed request:  GLXBadContext
    

This error occurs because the container is unable to use the NVIDIA drivers. The solution is to install the NVIDIA Container Toolkit and nvidia-docker2 as described in steps 3 and 4 above:

$ sudo apt-get install nvidia-docker2
$ sudo pkill -SIGHUP dockerd
Dockerfile

FROM osrf/ros:melodic-desktop-full
#ARG DEBIAN_FRONTEND=noninteractive
ENV DEBIAN_FRONTEND noninteractive
ARG conda_env=lfm
ENV PATH /opt/conda/bin:$PATH
ENV CONDA_DEFAULT_ENV $conda_env
# nvidia-container-runtime
ENV NVIDIA_VISIBLE_DEVICES \
${NVIDIA_VISIBLE_DEVICES:-all}
ENV NVIDIA_DRIVER_CAPABILITIES \
${NVIDIA_DRIVER_CAPABILITIES:+$NVIDIA_DRIVER_CAPABILITIES,}graphics
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
#some dependencies
RUN apt-get update && apt-get install -y --no-install-recommends apt-utils nano
# installs avahi for local network discovery (required for using the real robot)
RUN apt-get install -y avahi-daemon avahi-utils
# Python binary dependencies, developer tools
RUN apt-get -qq install build-essential make gcc zlib1g-dev git python3 python3-dev python3-pip
# Upgrade pip3 itself
#RUN pip3 install --upgrade pip
#more packages
RUN pip3 install setuptools
#RUN pip3 install distribute
# Get installation file
RUN wget --quiet https://repo.anaconda.com/archive/Anaconda3-2019.07-Linux-x86_64.sh -O ~/anaconda.sh
# Install anaconda at /opt/conda
RUN /bin/bash ~/anaconda.sh -b -p "/opt/conda"
# Remove installation file
RUN rm ~/anaconda.sh
# Make conda command available to all users
RUN ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh
# Activate conda environment with interactive bash session
RUN echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc
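The two NVIDIA `ENV` lines in the Dockerfile above rely on parameter expansion (Docker evaluates it, but the semantics match the shell's): `${VAR:-default}` falls back to a default when the variable is unset, and `${VAR:+text}` expands only when the variable is set, so the comma is added only when there are existing capabilities to append `graphics` to. A quick shell demonstration:

```shell
# ${VAR:-default} substitutes a default when VAR is unset or empty:
unset NVIDIA_VISIBLE_DEVICES
echo "${NVIDIA_VISIBLE_DEVICES:-all}"                                        # prints "all"

# ${VAR:+text} expands to text only when VAR is set:
unset NVIDIA_DRIVER_CAPABILITIES
echo "${NVIDIA_DRIVER_CAPABILITIES:+$NVIDIA_DRIVER_CAPABILITIES,}graphics"   # prints "graphics"

NVIDIA_DRIVER_CAPABILITIES=compute,utility
echo "${NVIDIA_DRIVER_CAPABILITIES:+$NVIDIA_DRIVER_CAPABILITIES,}graphics"   # prints "compute,utility,graphics"
```

The net effect is that the image always requests the `graphics` capability (needed for OpenGL in Gazebo) without clobbering any capabilities the user passed in.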
install.sh

#!/bin/bash
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "Root Directory is: ${ROOT_DIR}"
# FILE, ALIAS and FUNCTION appear to be used by lines elided from this gist
# (presumably to append the alias/function to ~/.bashrc if missing).
FILE=~/.bashrc
ALIAS="alias rosdocker"
FUNCTION="function newdockterm"
IMAGE_TAG=melodic
# DOCKER_FILE_PATH is not defined in the script as published; assume it points
# at the directory containing the Dockerfile. Note that run_image.sh below is
# invoked with dev:$IMAGE_TAG, so make sure the two tags match.
DOCKER_FILE_PATH="${ROOT_DIR}"
docker build "${DOCKER_FILE_PATH}" -t "dev:${DOCKER_FILE_PATH##*/}"
source ~/.bashrc
./run_image.sh dev:$IMAGE_TAG
sudo usermod -aG audio $USER
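The `${DOCKER_FILE_PATH##*/}` expansion in the build line strips everything up to and including the last `/`, so the image tag is derived from the directory name. With an illustrative path (not taken from the repository):

```shell
# ${VAR##*/} removes the longest prefix matching "*/", leaving the basename:
DOCKER_FILE_PATH=/home/user/ros_docker/melodic
echo "${DOCKER_FILE_PATH##*/}"   # prints "melodic"
```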
run_image.sh

#!/bin/bash
DOCKER_IMAGE=$1
WORK_DIR="${HOME}/Projects/"
if [ -z "$DOCKER_IMAGE" ]
then
    echo "usage: ./run_image.sh <docker-image-tag>"
    echo "example: ./run_image.sh dev:melodic"
    echo "to list built docker images run: docker images"
    exit 1
fi
XAUTH=/tmp/.docker.xauth
if [ ! -f $XAUTH ]
then
    xauth_list=$(xauth nlist :0 | sed -e 's/^..../ffff/')
    if [ ! -z "$xauth_list" ]
    then
        echo "$xauth_list" | xauth -f $XAUTH nmerge -
    else
        touch $XAUTH
    fi
    chmod a+r $XAUTH
fi
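The xauth manipulation deserves a note: `xauth nlist` prints cookie entries whose first four hex digits encode the connection family, and the `sed` replaces them with `ffff` (FamilyWild) so the cookie matches X connections made from inside the container regardless of hostname. With a made-up sample entry:

```shell
# A fabricated xauth nlist entry for illustration; the first field ("0100")
# is the connection family, which sed rewrites to the wildcard ffff:
sample="0100 0004 6c6f63616c686f7374 0000 12 4d49542d4d414749432d434f4f4b49452d31 0010 abcdef"
echo "$sample" | sed -e 's/^..../ffff/'
```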
docker run -it \
    --user=$(id -u) \
    --env="DISPLAY=$DISPLAY" \
    --env="PULSE_SERVER=unix:${XDG_RUNTIME_DIR}/pulse/native" \
    --env="QT_X11_NO_MITSHM=1" \
    --env="XAUTHORITY=$XAUTH" \
    --group-add="$(getent group audio | cut -d: -f3)" \
    --device="/dev/snd" \
    --network="host" \
    --privileged \
    --volume="${XDG_RUNTIME_DIR}/pulse/native:${XDG_RUNTIME_DIR}/pulse/native" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    --volume="$XAUTH:$XAUTH" \
    --volume="${ROOT_DIR}/conda-environment" \
    --volume="${ROOT_DIR}/avahi-configs:/etc/avahi" \
    --volume="/home/$USER:/home/$USER" \
    --volume="/etc/group:/etc/group:ro" \
    --volume="/etc/passwd:/etc/passwd:ro" \
    --volume="/etc/shadow:/etc/shadow:ro" \
    --volume="/etc/sudoers.d:/etc/sudoers.d:ro" \
    --workdir="/home/$USER/Projects" \
    --runtime=nvidia \
    $DOCKER_IMAGE \
    bash
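The `--group-add` line extracts the numeric GID of the audio group from its `/etc/group` entry with `cut`, so the container user can open `/dev/snd`. With a sample entry (the real one comes from `getent group audio`):

```shell
# /etc/group format is name:password:GID:members; field 3 is the GID:
sample="audio:x:29:pulse,myuser"
echo "$sample" | cut -d: -f3   # prints "29"
```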