@mwufi
Last active February 18, 2025 11:33
Install Docker in Google Colab!
# First let's update all the packages to the latest ones with the following command
sudo apt update -qq
# Now we want to install some prerequisite packages which will let us use HTTPS over apt
sudo apt install apt-transport-https ca-certificates curl software-properties-common -qq
# After that we will add the GPG key for the official Docker repository to the system
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# We will add the Docker repository to our APT sources
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
# Next let's update the package database with our newly added Docker package repo
sudo apt update -qq
# Finally, let's install Docker with the command below
sudo apt install docker-ce
# Let's check that the docker CLI is installed (this only prints usage; it does not confirm the daemon is running)
docker
# Originally, we did the following: (but doesn't work in Colab...)
# sudo systemctl status docker
# The output should be similar to this snippet below
# ● docker.service - Docker Application Container Engine
# Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
# Active: active (running) since Tue 2019-01-01 19:22:14 UTC; 1min 25s ago
# Docs: https://docs.docker.com
# Main PID: 10096 (dockerd)
# Tasks: 16
# CGroup: /system.slice/docker.service
# ├─10096 /usr/bin/dockerd -H fd://
# └─10113 docker-containerd --config /var/run/docker/containerd/containerd.toml
# And now that everything is good, you should be able to do:
# docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow
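Note that `apt-key` is deprecated on Ubuntu 22.04 and later, and the repository line above hardcodes the `bionic` codename. A keyring-based alternative, following Docker's current install documentation (a sketch, untested in Colab; the codename is detected from the runtime):

```shell
# Store Docker's GPG key in a dedicated keyring instead of apt-key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Register the repo, pinning it to that key and the runtime's codename
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update -qq && sudo apt install -y docker-ce
```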
@vishakha-gautam11041997

vishakha-gautam11041997 commented Aug 25, 2024

Hi, I found a solution for this. Check out https://github.com/indigo-dc/udocker. It's working perfectly in the free version of Google Colab.

  • Installation:
%%shell
pip install udocker
udocker --allow-root install
  • Sample usage:
udocker --allow-root run -p 127.0.0.1:8081:8081 -v -e TELEGRAM_API_ID=#### -e TELEGRAM_API_HASH=#### -e TELEGRAM_LOCAL=1 aiogram/telegram-bot-api:latest

Cheers!

Hi, thanks for the help.
It works too. I was wondering how you could copy a folder from your server [xxx.xxx.xx.xx] into this container?

@RCgit123

Thank you it 100% works 😊

@HomeDev68

HomeDev68 commented Oct 21, 2024

How do you run it as a daemon (udocker has no --daemon option)?

solution: !nohup udocker &

use this to detach the process

!(nohup udocker &)
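The subshell-plus-nohup pattern above can be demonstrated with any long-running command; the detached process keeps running after the launching cell returns (a sketch using `sleep` as a stand-in for udocker):

```shell
# The subshell ( ... ) stops the notebook cell from tracking the job,
# and nohup keeps it alive after the cell returns.
(nohup sh -c 'echo started > /tmp/detached.log; sleep 1' >/dev/null 2>&1 &)
sleep 2                      # give the detached process time to write
cat /tmp/detached.log        # prints: started
```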

@HomeDev68

For Colab users, use this for installation:

#@title Docker for Colab using udocker 
%%shell
pip install udocker
udocker --allow-root install
(nohup udocker &) #@markdown RUN IN BACKGROUND AS A DETACHED PROCESS

and this for easy usage:

#@title Easy Command Usage
args = "" # @param {"type":"string","placeholder":"commands"}
args = args.strip(" ")
if args:
  !udocker --allow-root {args}
else:
  !udocker --allow-root --help

@cdghhhiilnnotu

I think this would work.

Install the colab-xterm package, which allows us to use a terminal within our Colab notebook.

!pip install colab-xterm

%load_ext colabxterm

Open a terminal interface within your notebook, allowing you to run shell commands.

%xterm

Then, run the following commands in the terminal that appears:

sudo apt update -qq

sudo apt install apt-transport-https ca-certificates curl software-properties-common -qq

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

sudo apt update -qq

sudo apt install docker-ce

docker
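Bear in mind that installing docker-ce only puts the client and daemon binaries on disk; because Colab runs without systemd, the daemon never starts on its own. You can try launching it by hand in the xterm, though Colab's sandbox may still block it (a hedged sketch, not guaranteed to work):

```shell
# Start the Docker daemon manually in the background and capture its log
sudo dockerd > /tmp/dockerd.log 2>&1 &
sleep 5                       # wait for the daemon to come up (or fail)
tail -n 5 /tmp/dockerd.log    # inspect why it failed, if it did
docker ps                     # succeeds only if the daemon is running
```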

@mohilmakwana3107

I am facing the following error for the given command:
!udocker --allow-root run --name es01 --net elastic -p 9200:9200 -it -m 1GB docker.elastic.co/elasticsearch/elasticsearch:8.17.0

Error: manifest not found or not authorized
Error: no files downloaded
Error: image or container not available

@SomeBottle

As a possible alternative, Apptainer works on Google Colab.

You can follow the installation instructions in the documentation: https://apptainer.org/docs/admin/latest/installation.html#install-ubuntu-packages

It is possible to convert a Docker image to a .sif file for Apptainer:

!apptainer build hello.sif docker://hello-world

However, it is worth noting that Google Colab restricts the root user's capabilities, so you must gain full capabilities in a new user namespace (created by unshare -r) before running Apptainer as root:

!unshare -r apptainer run hello.sif

or just run as a regular user:

# The user in the container will be somebottle
!sudo -u somebottle apptainer run hello.sif

# The user in the container will be root
!sudo -u somebottle apptainer run --fakeroot hello.sif

Additionally, if you want to run a CUDA application inside a container, you need to pass the --nv flag on the command line, per https://apptainer.org/docs/user/1.3/gpu.html#requirements

# execute 'nvidia-smi' inside a newly created container
!unshare -r apptainer exec --nv pytorch-gpu.sif nvidia-smi

You may run into a problem like this:

(screenshot of the error message, 2025-01-21, omitted)

That's because Apptainer depends on ldconfig -p to find shared libraries, but NVIDIA libraries haven't been added to it on Google Colab.

Follow these steps to fix it:

  1. Check the LD_LIBRARY_PATH to find nvidia library path:

    !unshare -r env | grep LD_
    # >> LD_LIBRARY_PATH=/usr/lib64-nvidia
  2. Write the nvidia library path into /etc/ld.so.conf.d/:

    !echo "/usr/lib64-nvidia" >> /etc/ld.so.conf.d/nvidia.conf
  3. Refresh library cache:

    !ldconfig
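The three steps above can be combined into a single Colab cell (a sketch assuming the library path is /usr/lib64-nvidia, as reported by LD_LIBRARY_PATH above; check that variable first on a different runtime):

```shell
# Register Colab's NVIDIA library directory with the dynamic linker
echo "/usr/lib64-nvidia" | sudo tee /etc/ld.so.conf.d/nvidia.conf
sudo ldconfig
ldconfig -p | grep -i nvidia   # the NVIDIA libraries should now be listed
```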

Now you'll be able to find nvidia library files in ldconfig -p, and can execute nvidia-smi command normally:

(screenshot of successful nvidia-smi output, 2025-01-22, omitted)


I hope this helps!

Original post: https://github.com/cat-note/bottleofcat/blob/main/Containerization/GPUApptainerOnGoogleColab.md

@musahi0128

I am facing the following error for the given command: !udocker --allow-root run --name es01 --net elastic -p 9200:9200 -it -m 1GB docker.elastic.co/elasticsearch/elasticsearch:8.17.0

Error: manifest not found or not authorized
Error: no files downloaded
Error: image or container not available

I was testing with this; it correctly pulls the image and runs the container, but it refuses to continue because it detects it was run as root:
!udocker --allow-root run --name=es01 --publish=9200:9200 docker.elastic.co/elasticsearch/elasticsearch:8.17.0

Creating a normal user and running the same command without the --allow-root part gets further, but the process died with this last message:

{"@timestamp":"2025-01-28T20:45:11.173Z", "log.level": "INFO", "message":"Native controller process has stopped - no new native processes can be started", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"ml-cpp-log-tail-thread","log.logger":"org.elasticsearch.xpack.ml.process.NativeController","elasticsearch.node.name":"df2c331b3b64","elasticsearch.cluster.name":"docker-cluster"}

So, I would say, don't bother.
