@neuro-sys
Last active May 17, 2024 06:47
Dockerfile for headless-gl. You also need to run the Docker container in "privileged" mode, mount the X11 auth volume into the container, and possibly grant Xauth access for connections from outside.
# Build stage
FROM nvidia/opengl:1.0-glvnd-devel-ubuntu20.04 AS builder

# Install Node.js 14
RUN apt-get update -y && \
    apt-get install -y curl gnupg ca-certificates && \
    curl -sL https://deb.nodesource.com/setup_14.x | bash - && \
    apt-get install -y nodejs

# headless-gl build dependencies
# See: https://github.com/stackgl/headless-gl#ubuntudebian
RUN apt-get update -y && \
    apt-get install -y build-essential python libxi-dev libglu-dev libglew-dev pkg-config git

WORKDIR /build
COPY package*.json ./
COPY my-threejs-app my-threejs-app
RUN npm install --production
COPY tsconfig.json ./
COPY src src
RUN npm run build

# Build headless-gl natively, as the prebuilt binary is not compatible.
# See https://github.com/stackgl/headless-gl/issues/65#issuecomment-252742795
RUN git clone https://github.com/stackgl/headless-gl.git && \
    cd headless-gl && \
    git submodule init && \
    git submodule update && \
    npm install && \
    npm run rebuild
# We later copy headless-gl/build/Release/webgl.node into the runtime stage.

# Runtime stage: install and copy only what is needed to run
FROM nvidia/opengl:1.0-glvnd-devel-ubuntu20.04

# Install Node.js 14
RUN apt-get update -y && \
    apt-get install -y curl gnupg ca-certificates && \
    curl -sL https://deb.nodesource.com/setup_14.x | bash - && \
    apt-get install -y nodejs

# See: https://github.com/stackgl/headless-gl#ubuntudebian
RUN apt-get update -y && \
    apt-get install -y build-essential python libxi-dev libglu-dev libglew-dev pkg-config git

EXPOSE 80
WORKDIR /app
COPY fonts fonts
COPY --from=builder /build/node_modules node_modules
COPY --from=builder /build/tsconfig.json tsconfig.json
COPY --from=builder /build/dist dist
COPY --from=builder /build/package.json package.json
# Replace the prebuilt gl binary with the natively built one
COPY --from=builder /build/headless-gl/build/Release/webgl.node node_modules/gl/build/Release/webgl.node
ENV DISPLAY=:0
CMD [ "node", "dist/src/index.js" ]
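The gist description mentions privileged mode, an X11 auth volume, and Xauth access. A minimal sketch of the corresponding `docker run` invocation might look like the following; the image name `my-threejs-renderer` and the Xauthority path are assumptions, not part of the gist, so adjust them for your setup:

```shell
# Allow local Docker containers to connect to the host's X server.
# This loosens X access control; scope it as narrowly as you can.
xhost +local:docker

# Run the image against the host's X server. --gpus all requires the
# NVIDIA Container Toolkit to be installed on the host.
docker run --rm \
  --privileged \
  --gpus all \
  -e DISPLAY=:0 \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v "$HOME/.Xauthority:/root/.Xauthority:ro" \
  my-threejs-renderer
```

For software-rendered test containers, `--gpus all` can be dropped and the entrypoint wrapped in `xvfb-run` instead, as discussed in the comments below.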
@neuro-sys
Author

@stepancar X is needed, at least the last time I was using it, because webgl.node is linked against the X libraries (you can check with ldd). But X itself can run headless with a virtual screen, and in software rendering mode via xvfb-run. The setup is a bit too long to explain and recall from memory.

headless-gl depends on ANGLE, and I think that can be linked without X using EGL (which was something I was trying to do, but gave up on), but X with a virtual screen and xvfb-run is just much simpler. Note that if you want to use the GPU, don't use xvfb-run; you still need a virtual display configuration for X (I can't remember the details).
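The X dependency and the xvfb-run route mentioned above can be checked and exercised with something like the following sketch (the paths assume headless-gl was installed as the `gl` npm package and the app entrypoint matches the Dockerfile):

```shell
# Confirm that webgl.node is linked against X libraries, as noted above.
ldd node_modules/gl/build/Release/webgl.node | grep -i 'libX\|libGL'

# Software rendering without a real display: xvfb-run starts a virtual
# framebuffer X server (-a picks a free display number), points DISPLAY
# at it, and runs the given command under it.
xvfb-run -a -s "-screen 0 1280x1024x24" node dist/src/index.js
```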

@stepancar

stepancar commented Apr 4, 2023

@neuro-sys this makes more sense now, thank you a lot!
Are you running this in Kubernetes? I'm curious how you install the X server on the node pool. We have a custom DaemonSet that runs the X server, but before starting it, it adds a taint on the node to prevent deployments from being scheduled before the X server is up.
Or maybe you are using some standard DaemonSet for that?

@neuro-sys
Author

@stepancar, IIRC the image used in this Dockerfile, nvidia/opengl:1.0-glvnd-devel-ubuntu20.04, comes pre-installed with an X server and even the NVIDIA drivers, so I didn't have to set that up myself.

@stepancar

@neuro-sys, but how does the pod communicate with the host in your case? Or do you not need that because you are rendering without a GPU?

@neuro-sys
Author

neuro-sys commented Apr 4, 2023

@stepancar that is the part in the description:

You also need to run the Docker container in "privileged" mode, mount the X11 auth volume into the container, and possibly grant Xauth access for connections from outside.

And I don't remember it all from memory, and it takes long to explain. Essentially it comes down to how and why X has a server-client model: client and server talk via sockets (in this case a Unix domain socket, typically under /tmp/.X11-unix, hence the volume mounting) even though they are on the same machine (i.e. the host and the Docker container/pod).
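On the host side, the channel being described can be inspected like this; a sketch of what the volume mount exposes, not commands from the thread:

```shell
# The X server's Unix domain sockets live here; mounting this directory
# into the container is what lets the containerized client reach the
# host's X server. The socket X0 corresponds to DISPLAY=:0.
ls -l /tmp/.X11-unix/

# The client must also present a matching auth cookie; list the cookies
# known for the current display.
xauth list "$DISPLAY"
```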

you don't need it because you are running rendering without GPU?

I ran it with the GPU in production, and with software rendering for test containers. In the end we wrote a non-JavaScript application to do the rendering.

@stepancar

@neuro-sys Thank you a lot! I was able to link headless-gl with the EGL provided by the NVIDIA image. I removed the X server from our hosts, which simplified our infrastructure.

@neuro-sys
Author

@stepancar Glad to be of help, and good to hear you got EGL working. Feel free to share links or describe how you did it, in case someone else finds their way here with the same need.

@stepancar

Sure!
This comment has the explanation: stackgl/headless-gl#116 (comment)
