This gist contains a Docker Compose configuration to set up and run n8n, a workflow automation tool, alongside vLLM, a high-performance inference engine for large language models. This setup allows you to create automated workflows that can leverage powerful language models.
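As a rough illustration of what such a Compose file looks like, here is a minimal sketch. The image tags, ports, served model, and volume names are assumptions for illustration, not the exact configuration from this gist:

```yaml
# docker-compose.yml — minimal sketch; the model name and port mappings
# below are illustrative assumptions.
services:
  n8n:
    image: n8nio/n8n            # official n8n image
    ports:
      - "5678:5678"             # n8n web UI
    volumes:
      - n8n_data:/home/node/.n8n

  vllm:
    image: vllm/vllm-openai:latest   # vLLM's OpenAI-compatible server
    ports:
      - "8000:8000"             # OpenAI-compatible API endpoint
    environment:
      # Passed through from the .env file so vLLM can pull gated models
      - HUGGING_FACE_HUB_TOKEN=${HF_TOKEN}
    command: --model Qwen/Qwen2.5-7B-Instruct   # example model, swap as needed
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia    # expose the GPU to the vLLM container
              count: all
              capabilities: [gpu]

volumes:
  n8n_data:
```

With both services up, n8n workflows can call the vLLM server at `http://vllm:8000/v1` (the service name resolves on the shared Compose network) using any OpenAI-compatible HTTP node or credential.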
- Docker and Docker Compose installed on your machine.
- A Hugging Face account and an access token (if you plan to use models from Hugging Face).
- An NVIDIA GPU with recent drivers (optional, but strongly recommended; vLLM is designed for GPU-accelerated inference).
- Create Environment Variables
Create a `.env` file in the root directory of the project and add your Hugging Face token:
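For example, the file might contain a single line (the variable name `HF_TOKEN` is an assumption here; use whatever name your `docker-compose.yml` references):

```env
# .env — keep this file out of version control
HF_TOKEN=your_hugging_face_token_here
```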