@JBGruber
Last active June 17, 2024 13:41
My compose file to run ollama and open-webui
```yaml
services:
  # ollama and API
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    pull_policy: missing
    tty: true
    restart: unless-stopped
    # Expose Ollama API outside the container stack
    ports:
      - 11434:11434
    volumes:
      - ollama:/root/.ollama
    # GPU support (turn off by commenting with # if you don't have an nvidia gpu)
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu
  # webui, navigate to http://localhost:3000/ to use
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    pull_policy: missing
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - 3000:8080
    environment:
      - "OLLAMA_API_BASE_URL=http://ollama:11434/api"
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
  open-webui: {}
```
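If you don't have an Nvidia GPU, the `ollama` service is the same file minus the `deploy:` block. A CPU-only sketch of just that service (everything else in the compose file stays unchanged):

```yaml
services:
  # ollama and API (CPU-only: no deploy/GPU reservation)
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    pull_policy: missing
    tty: true
    restart: unless-stopped
    ports:
      - 11434:11434
    volumes:
      - ollama:/root/.ollama
```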

JBGruber commented Jan 23, 2024

Download the compose file:

```shell
wget https://gist.githubusercontent.com/JBGruber/73f9f49f833c6171b8607b976abc0ddc/raw/docker-compose.yml
```

Download and run ollama and the webui (or comment out the `deploy:` section of the compose file if you do not have an Nvidia GPU):

```shell
docker-compose up -d
```

Update:

```shell
docker-compose pull
```

Note: this is the Nvidia-GPU version. Once it's running, navigate to http://localhost:3000/ (or replace localhost with the IP address of the computer this is running on).
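Besides the webui, the compose file also exposes the Ollama API itself on port 11434 of the host. A minimal sketch of how a client could build a request against the `/api/generate` endpoint (the model name `llama3` is just an example, and it would first have to be pulled inside the container, e.g. with `docker exec -it ollama ollama pull llama3`):

```python
import json
from urllib.request import Request, urlopen

# API base assumed from the compose file's port mapping (11434 on the host).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_generate_request(model: str, prompt: str) -> Request:
    """Build (but do not yet send) a POST request for Ollama's /api/generate."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_generate_request("llama3", "Why is the sky blue?")
print(req.full_url)  # http://localhost:11434/api/generate
# With the stack running, the request would be sent with:
# response = json.load(urlopen(req))
```

With `"stream": False` the API returns a single JSON object instead of a stream of chunks, which keeps a quick test simple.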


JBGruber commented Mar 1, 2024

More information on how to run this is now available here:

Docker + Ollama install
