emami-io / n8n_vLLM.md
Last active April 21, 2026 19:01
n8n + vLLM Docker setup: a ready-to-run configuration for a local automation system.

n8n and vLLM Docker Setup

This gist contains a Docker Compose configuration that runs n8n, a workflow automation tool, alongside vLLM, a high-performance inference engine for large language models. With this setup you can build automated workflows that call a locally hosted language model.
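To make the architecture concrete, here is a minimal sketch of what such a Compose file can look like. The service names, the example model (`Qwen/Qwen2.5-1.5B-Instruct`), and the volume name are illustrative assumptions, not the gist's exact file; the images (`vllm/vllm-openai`, `n8nio/n8n`) and default ports (8000 for vLLM, 5678 for n8n) are the projects' published defaults.

```yaml
# Minimal sketch — service names, model, and volume are assumptions.
services:
  vllm:
    image: vllm/vllm-openai:latest          # OpenAI-compatible vLLM server
    environment:
      - HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}
    command: --model Qwen/Qwen2.5-1.5B-Instruct   # example model (assumption)
    ports:
      - "8000:8000"                         # vLLM's default API port
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia                # requires NVIDIA Container Toolkit
              count: all
              capabilities: [gpu]
  n8n:
    image: n8nio/n8n:latest
    ports:
      - "5678:5678"                         # n8n's default web UI port
    volumes:
      - n8n_data:/home/node/.n8n            # persist workflows and credentials
volumes:
  n8n_data:
```

Inside an n8n workflow, the vLLM API is then reachable at `http://vllm:8000/v1` via Docker's internal DNS, since both services share the default Compose network.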

Prerequisites

  • Docker and Docker Compose installed on your machine.
  • A Hugging Face account and an access token (if you plan to use models from Hugging Face).
  • An NVIDIA GPU with the NVIDIA Container Toolkit installed (optional, but strongly recommended — vLLM performs best with GPU acceleration).

Setup Instructions

  1. Create environment variables. Create a .env file in the root directory of the project and add your Hugging Face token:
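The file might look like the following. `HUGGING_FACE_HUB_TOKEN` is the variable name the Hugging Face libraries (and vLLM) read for authentication; the token value is a placeholder you replace with your own from huggingface.co/settings/tokens.

```shell
# .env — do not commit this file to version control
HUGGING_FACE_HUB_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
```

Docker Compose loads a `.env` file in the project root automatically, so `${HUGGING_FACE_HUB_TOKEN}` can be referenced directly in the Compose file.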