# gist by @costa, created January 23, 2024
# NOTE re: deployment
# - with self server (running on any VM from any provider):
#   $ @@ dev docker-compose up --remove-orphans
#   and then:
#   $ @@ ^dev # will securely tunnel the WUI (:3000) to http://localhost:16623
# - with docker-compose configured otherwise:
#   well, you must know how to run this in your environment then; see below
version: '2.1'
services:
  chat:
    image: localai/localai:v2.5.1-ffmpeg-core
    environment:
      - CORS=true
      - CORS_ALLOW_ORIGINS=*
    # NOTE within your custom docker-compose environment,
    # you will need this port securely tunnelled to your local machine somehow
    # (host port 21703 -> container port 8080, where LocalAI listens):
    # ports:
    #   - 21703:8080
    restart: on-failure
  chat-bootstrapper:
    depends_on:
      - chat
    image: curlimages/curl
    command:
      - "http://chat:8080/models/apply"
      - "-H"
      - "Content-Type: application/json"
      - "-d"
      # NOTE you're welcome to visit https://github.com/go-skynet/model-gallery
      # or discover other ways to load compatible models
      - |
        {
          "url": "github:go-skynet/model-gallery/gpt4all-j.yaml",
          "name": "gpt-3.5-turbo"
        }
    restart: on-failure
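  # NOTE (assumption, per the LocalAI model-gallery API) the /models/apply call
  # above returns a job uuid; if bootstrapping seems stuck, you can poll the
  # download progress with something like:
  # $ curl http://chat:8080/models/jobs/<uuid>
  # (from inside the compose network, or via your tunnelled port)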
  wui:
    # NOTE using quay.io just to note the alternative
    image: quay.io/go-skynet/localai-frontend:master
    environment:
      # NOTE this is the port number auto-assigned to this compose service (chat:8080)
      # by self server, so this is what we're using
      - API_HOST=http://localhost:21703
      - HOST=0.0.0.0
      # - PORT=3000
    restart: on-failure
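
# NOTE once the model has been applied (the download can take a while),
# you can sanity-check the OpenAI-compatible API from your local machine,
# assuming the chat port is reachable there (e.g. via the tunnel noted above):
# $ curl http://localhost:21703/v1/models
# $ curl http://localhost:21703/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "hello"}]}'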