ukVee / LocalAI-Setup.md
Last active December 19, 2025 23:08
Full guide to setting up a local AI tech stack on ANY machine! This stack uses Ollama and OpenWebUI, deployed as containers with Docker and docker-compose. AMD and NVIDIA GPUs supported!

LocalAI-Containerized-Stack

This guide walks through setting up Ollama and OpenWebUI using Docker Compose. The goal is simple: a GPU-accelerated local LLM runtime that interfaces seamlessly with OpenWebUI to provide a ChatGPT-like service.

If you already have Docker and docker-compose installed, skip this section!

Installation Instructions

First we will install Docker and docker-compose. I am on Arch Linux, so I will use sudo pacman -S docker docker-compose. If you are on a different system, why? Haha, seriously though, try out Arch sometime, even just in a VM. If you're not on Arch, go ahead and check out an installation guide below.
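On Arch, the full install looks something like the following. This is a sketch: the systemctl and usermod steps are standard post-install setup from my own experience, not something the pacman command does for you, so adapt them to your distro.

```shell
# Install Docker and docker-compose from the official repos.
sudo pacman -S docker docker-compose

# Enable and start the Docker daemon.
sudo systemctl enable --now docker.service

# Optional: run docker without sudo (log out and back in afterwards).
sudo usermod -aG docker "$USER"

# Sanity check that both tools are on PATH.
docker --version
docker-compose --version
```

If the version commands print without errors, you are ready for the compose stack.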

Windows | Ubuntu | Mint | Debian
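With Docker installed, the stack itself boils down to one compose file. Here is a minimal sketch assuming the publicly published images ollama/ollama and ghcr.io/open-webui/open-webui; the port mappings, volume name, and GPU passthrough stanza are illustrative choices, not the only way to wire this up.

```yaml
# docker-compose.yml — minimal Ollama + OpenWebUI sketch.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama        # persist downloaded models
    ports:
      - "11434:11434"               # Ollama's default API port
    # NVIDIA GPU passthrough (requires nvidia-container-toolkit):
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                 # browse to http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama:
```

Bring it up with docker-compose up -d from the directory containing the file. For AMD GPUs, Ollama publishes a ROCm variant of the image, which instead needs the /dev/kfd and /dev/dri devices passed through.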