@ms-2k
ms-2k / cuda-rocm-vulkan.Dockerfile
Last active September 19, 2025 09:33
Dockerfile for llama.cpp built with CUDA, ROCm, and Vulkan backends, targeting an RTX 3090 + AMD Instinct MI50 (or any RTX 30-series + gfx906). Requires running the ROCm Offline Installer Creator first (I haven't yet figured out a package-manager installation method that doesn't conflict with the CUDA base images; still working on it).
ARG UBUNTU_VERSION=24.04
ARG CUDA_VERSION=12.8.1
# Target the CUDA build image
ARG BASE_CUDA_DEV_CONTAINER=nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION}
ARG BASE_CUDA_RUN_CONTAINER=nvidia/cuda:${CUDA_VERSION}-runtime-ubuntu${UBUNTU_VERSION}
# Build-time ROCm base (CUDA devel + ROCm)
FROM ${BASE_CUDA_DEV_CONTAINER} AS build-rocm-base
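The `ARG` defaults above can be overridden at build time without editing the Dockerfile. A sketch of the invocation (the image tag and the choice of overridden versions are assumptions, not from the gist):

```
# Build with a different Ubuntu/CUDA combination than the defaults;
# assumes the Dockerfile is saved as cuda-rocm-vulkan.Dockerfile in
# the current directory, alongside the ROCm Offline Installer output.
docker build \
  --build-arg UBUNTU_VERSION=22.04 \
  --build-arg CUDA_VERSION=12.8.1 \
  -f cuda-rocm-vulkan.Dockerfile \
  -t llama.cpp-cuda-rocm-vulkan .
```

Note that `ARG`s declared before the first `FROM` are only in scope for `FROM` lines; any stage that needs them again must re-declare them with a bare `ARG` after its `FROM`.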