### vLLM + LangChain/LangGraph ChatCompletion Notes
1. **Use the OpenAI Chat wrapper** – LangChain's `ChatOpenAI` (and anything that consumes `ChatModel`/`openai.ChatCompletion` semantics) can point at an OpenAI-compatible base URL, so run your vLLM server on the standard `/v1` endpoint and pass that as `base_url` (or via the `OPENAI_BASE_URL` environment variable) when instantiating the client. Example:
   ```python
   from langchain_openai import ChatOpenAI

   llm = ChatOpenAI(
       model="chat-model-name",
       api_key="placeholder",  # vLLM doesn't enforce a key by default
       base_url="http://localhost:8000/v1",