This guide walks through setting up Ollama and OpenWebUI using Docker Compose. The goal is simple: get a GPU-accelerated local LLM runtime that OpenWebUI can talk to, giving you a ChatGPT-like service running entirely on your own machine.
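To preview where we are headed, the whole stack can be sketched as a single `docker-compose.yml` like the one below. This is a minimal sketch, not the final file: the GPU reservation block assumes an NVIDIA card with the NVIDIA Container Toolkit installed, and the port and volume names are just placeholders you can change.

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama        # persist downloaded models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia        # assumes NVIDIA GPU + container toolkit
              count: all
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                 # UI reachable at http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama:
```

OpenWebUI finds Ollama over the Compose network by service name (`http://ollama:11434`), so no host networking tricks are needed.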
If you already have Docker and Docker Compose installed, skip this section!
First we will install Docker and Docker Compose. I am on Arch Linux, so I will use `sudo pacman -S docker docker-compose`. If you are on a different system, why? Haha, seriously though, try out Arch sometime, even just in a VM. If you're not on Arch, go ahead and check out an installation guide below.
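On Arch, the full install usually looks like the following. The `usermod` step is optional and assumes you want to run `docker` without `sudo`; it takes effect after you log out and back in.

```shell
# Install Docker and Docker Compose from the official repos
sudo pacman -S docker docker-compose

# Start the Docker daemon now and enable it on boot
sudo systemctl enable --now docker.service

# Optional: run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker "$USER"

# Sanity check that both tools are on the PATH
docker --version
docker-compose --version
```

On other distros the package names differ (e.g. Docker's own apt/dnf repos), but the `systemctl enable --now docker.service` and group steps are the same.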