
@ahallora
Created January 31, 2024 08:51
LocalAI and StableDiffusion on Ubuntu (WSL2)

This guide covers getting LocalAI and Stable Diffusion running on Ubuntu (via WSL2) with an NVIDIA GPU.

Prerequisites:

  • Docker installed and the daemon running
  • nvidia-smi working inside WSL2 (i.e. NVIDIA drivers installed on the Windows host)
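A quick way to confirm both prerequisites from a WSL2 shell (a minimal sketch; the `check_cmd` helper is a name invented here):

```shell
# Minimal prerequisite check; check_cmd is a name chosen for this sketch.
check_cmd() {
  command -v "$1" >/dev/null 2>&1 && echo "$1: ok" || echo "$1: missing"
}
check_cmd docker
check_cmd nvidia-smi
```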

Verify that Docker can see the GPU by listing the available GPUs from inside a container:

docker run -it --rm --gpus all ubuntu nvidia-smi -L

Installation

Start LocalAI with the animagine-xl model and NVIDIA CUDA 12:

docker run -ti -p 8080:8080 -e COMPEL=0 --gpus all localai/localai:v2.7.0-cublas-cuda12 animagine-xl
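The container stays in the foreground while it downloads and loads the model, which can take a while. A small helper like this can poll the OpenAI-compatible /v1/models endpoint until the API answers (the function name and timeout values are illustrative choices, not part of LocalAI):

```shell
# Poll a URL until it responds or we give up; the name and timeouts
# are illustrative assumptions, not part of LocalAI itself.
wait_for_api() {
  url=$1; tries=${2:-60}
  i=0
  while [ "$i" -lt "$tries" ]; do
    curl -sf "$url" >/dev/null 2>&1 && { echo "ready"; return 0; }
    i=$((i + 1))
    sleep 2
  done
  echo "timed out"
  return 1
}
# Example (with the container from above running):
# wait_for_api http://localhost:8080/v1/models
```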

Once the API is up, run the following in another terminal to apply the Stable Diffusion model from the model gallery:

curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{
  "url": "github:go-skynet/model-gallery/stablediffusion.yaml"
}'
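The apply call returns immediately with a job uuid, and the job can then be polled at /models/jobs/<uuid> until its "processed" field turns true (verify the exact endpoint and field names against your LocalAI version — they are assumptions here). A loop along these lines, with the fetch command injected, could wait for completion:

```shell
# Poll an apply job until its JSON status contains "processed":true.
# The endpoint and field names are assumptions about the LocalAI
# gallery API; the fetch command is passed in as "$1".
poll_job() {
  fetch=$1; tries=${2:-120}
  i=0
  while [ "$i" -lt "$tries" ]; do
    status=$($fetch)
    case "$status" in
      *'"processed":true'*) echo "done"; return 0 ;;
    esac
    i=$((i + 1))
    sleep 2
  done
  echo "timed out"
  return 1
}
# Example, assuming the apply response carried "uuid":"<job-id>":
# poll_job 'curl -s http://localhost:8080/models/jobs/<job-id>'
```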

How to use

While the API is running, call it just as you would call the OpenAI API:

curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
  "prompt": "A cute baby sea otter",
  "size": "256x256"
}'
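To script the call, the request body can be built and the image URL extracted from the response. The data[0].url shape follows the OpenAI images schema that LocalAI mirrors (some versions return b64_json instead), and make_image_payload is a helper invented for this sketch:

```shell
# Build the JSON request body; make_image_payload is a name invented here.
make_image_payload() {
  printf '{"prompt": "%s", "size": "%s"}' "$1" "$2"
}
make_image_payload "A cute baby sea otter" "256x256"
# With the API running, post it and pull out the image URL:
# curl -s http://localhost:8080/v1/images/generations \
#   -H "Content-Type: application/json" \
#   -d "$(make_image_payload "A cute baby sea otter" "256x256")" \
#   | jq -r '.data[0].url'
```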