This guide runs llama.cpp in a container on any Linux system. It uses the Vulkan backend, which works with AMD GPUs, but you can substitute any other Dockerfile from the .devops directory (e.g. cuda.Dockerfile for NVIDIA). First install Podman and enable its Docker-compatible socket (the commands below use Fedora's dnf; adapt the package install for your distribution):
sudo dnf install podman podman-docker
sudo systemctl enable --now podman.socket
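With Podman installed, a minimal sketch of building and running the Vulkan image follows. The image tag llama-cpp-vulkan, the models directory, and the model filename are placeholders of my choosing, and the exact runtime flags depend on which target the Dockerfile builds; check .devops/vulkan.Dockerfile in your checkout.

```shell
# Build the image from a llama.cpp source checkout (run from the repo root).
podman build -t llama-cpp-vulkan -f .devops/vulkan.Dockerfile .

# Run it with the GPU passed through. /dev/dri exposes the AMD render nodes
# that Vulkan needs inside the container; the model path is a placeholder.
podman run --rm -it \
  --device /dev/dri \
  -v ./models:/models \
  llama-cpp-vulkan \
  -m /models/your-model.gguf
```

Because podman-docker is installed and podman.socket is active, tools that expect a Docker daemon (including the docker CLI alias) will talk to Podman transparently.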