This guide outlines how to set up a local LLM stack, including the following components:
- Ollama installed locally with GPU acceleration for running models (see the install sketch after this list).
- Open-WebUI running in Docker for browser-based chat interactions.
- Continue, a VSCode extension for LLM-assisted development inside Visual Studio Code.
This setup assumes Docker and VSCode are already installed on your system.
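As a starting point, here is a minimal sketch of the Ollama installation, assuming a Linux host with working GPU drivers (Ollama picks up NVIDIA/AMD GPUs automatically when the drivers are present). The model name is only an example; substitute any model from the Ollama library.

```bash
# Install Ollama via the official install script (Linux; macOS users
# can use the installer from ollama.com instead).
curl -fsSL https://ollama.com/install.sh | sh

# Pull an example model and confirm it responds. llama3.1 is an
# arbitrary choice; swap in whatever model you prefer.
ollama pull llama3.1
ollama run llama3.1 "Say hello"

# Ollama serves its API on http://localhost:11434 by default; both
# Open-WebUI and Continue will talk to this endpoint.
```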
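With Ollama running, Open-WebUI can be started as a Docker container. The command below follows the project's documented quick-start invocation; the host port (3000) is a common choice you can change freely.

```bash
# Run Open-WebUI in Docker, connecting to the Ollama instance on the
# host. The --add-host flag lets the container reach the host's
# Ollama API via host.docker.internal.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

# Then open http://localhost:3000 in a browser and create a local account.
```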
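Finally, the Continue extension can be installed from the command line. The extension ID below is the marketplace identifier at the time of writing; you can also install it from the Extensions view inside VSCode.

```bash
# Install the Continue extension using the VSCode CLI.
code --install-extension Continue.continue
```

Once installed, point Continue at the local Ollama endpoint (http://localhost:11434) through its model configuration; the exact configuration format varies between Continue versions, so follow the prompts in the extension's setup flow.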