Completing these steps gives you an LLM-powered web scraper in Home Assistant via the Model Context Protocol, including an example of a template entity that extracts news headlines for a display.
This assumes you already know about the following:
- Home Assistant
- Voice Assistant with Conversation Agent (OpenAI, Google Gemini, Anthropic, etc.)
- Python virtual environment for running MCP server
This guide will get you an LLM-powered template entity that uses MCP to fetch external web pages:
```mermaid
graph LR
A["Home Assistant LLM Conversation Agent"] <--> |sse| B["mcp-proxy"]
B <--> |stdio| C["mcp-fetch MCP Server"]
C <--> |http/s| D["web pages"]
style A fill:#ffe6f9,stroke:#333,color:black,stroke-width:2px
style B fill:#e6e6ff,stroke:#333,color:black,stroke-width:2px
style C fill:#e6ffe6,stroke:#333,color:black,stroke-width:2px
style D fill:#e6ffe6,stroke:#333,color:black,stroke-width:2px
```
Install the dependencies `mcp-proxy` and `mcp-server-fetch`:

```shell
$ uv pip install mcp-proxy mcp-server-fetch
```
Most MCP servers are stdio based (e.g. spawned by Claude Desktop), so we need a server to expose them to Home Assistant. We use mcp-proxy, which runs an SSE server and spawns the stdio MCP server that can fetch web pages. The proxy spawns the command without any environment variables, so we pass our PATH explicitly:
```shell
$ mcp-proxy --sse-port 42783 --env PATH "${PATH}" -- uv run mcp-server-fetch
...
INFO: Uvicorn running on http://127.0.0.1:42783 (Press CTRL+C to quit)
```
The SSE server is now exposed at http://127.0.0.1:42783/sse. You can set flags to change the IP and port the proxy listens on.
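To sanity-check that the proxy is actually listening before wiring it into Home Assistant, a quick probe of the SSE endpoint can help. This is a rough sketch using only the Python standard library; the host and port match the example above, and `check_sse` is a name chosen here, not part of mcp-proxy:

```python
import http.client


def check_sse(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if an SSE endpoint at http://host:port/sse answers
    with an event stream, False if nothing is listening or it times out."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", "/sse", headers={"Accept": "text/event-stream"})
        resp = conn.getresponse()
        ctype = resp.getheader("Content-Type", "") or ""
        conn.close()
        return resp.status == 200 and "text/event-stream" in ctype
    except OSError:
        # Connection refused, unreachable host, or timeout
        return False


# Example: check_sse("127.0.0.1", 42783) should be True while mcp-proxy runs.
```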
NOTE: This integration is currently available in the 2025.2.0b beta release.
Manually add the integration
Set the SSE Server URL to your MCP proxy server's SSE endpoint, e.g. http://127.0.0.1:42783/sse. Make sure the URL ends with /sse.
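If you script the setup, it can help to normalize the URL first so the required /sse suffix is always present. A small illustrative helper (the function name is mine, not part of Home Assistant):

```python
def normalize_sse_url(url: str) -> str:
    """Ensure an MCP proxy URL ends with the /sse path the integration expects."""
    url = url.rstrip("/")
    if not url.endswith("/sse"):
        url += "/sse"
    return url
```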
The integration will create a new LLM API called mcp-fetch
that is available to conversation agents. It does not add any other entities or devices.
- Navigate to your existing conversation agent integration and reconfigure it.
- Set the LLM Control to mcp-fetch.
- Update the prompt to be something simple for now, such as:

  You are an agent for Home Assistant, with access to tools through an external server. This external server enables LLMs to fetch web page contents.

Open the conversation agent and ask it to fetch a web page:


Let's now experiment with using the tool to make a sensor. We should ask the model to respond more succinctly, and call it from Developer Tools. Here is a YAML example:
```yaml
action: conversation.process
data:
  agent_id: conversation.google_generative_ai
  text: >-
    Please visit bbc.com and summarize the first headline. Please respond with
    succinct output as the output will be used as headline for an eInk display
    with limited space.
```
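The same action can also be invoked from outside Home Assistant through its REST API, which is handy for testing from the command line. This sketch assumes a long-lived access token and the `return_response` query parameter supported by recent Home Assistant releases; the base URL, token, and function name here are placeholders, not values from this guide:

```python
import json
import urllib.request


def build_conversation_request(
    base_url: str, token: str, agent_id: str, text: str
) -> urllib.request.Request:
    """Build a REST request equivalent to calling the conversation.process
    action from Developer Tools. base_url and token are assumptions: your
    Home Assistant URL and a long-lived access token from your user profile."""
    payload = {"agent_id": agent_id, "text": text}
    return urllib.request.Request(
        f"{base_url}/api/services/conversation/process?return_response",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Usage sketch (requires a running Home Assistant instance):
# req = build_conversation_request(
#     "http://homeassistant.local:8123", "YOUR_TOKEN",
#     "conversation.google_generative_ai",
#     "Please visit bbc.com and summarize the first headline.",
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```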
We could improve this further with a few-shot set of example headlines; however, the model already follows the instructions well and produces a headline:
```yaml
response:
  speech:
    plain:
      speech: Trump threatens Greenland and Panama Canal.
      extra_data: null
  card: {}
  language: en
  response_type: action_done
  data:
    targets: []
    success: []
    failed: []
conversation_id: 01JH2B56RMR2DABQQGAAA9X95D
```
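If you process this result in a script rather than a template, the headline text lives at response.speech.plain.speech, the same path the template entity references. A minimal extraction helper (the function name is mine):

```python
def extract_speech(result: dict) -> str:
    """Pull the plain speech text out of a conversation.process result,
    i.e. result["response"]["speech"]["plain"]["speech"]."""
    return result["response"]["speech"]["plain"]["speech"]


sample = {
    "response": {
        "speech": {
            "plain": {
                "speech": "Trump threatens Greenland and Panama Canal.",
                "extra_data": None,
            }
        },
        "card": {},
        "language": "en",
        "response_type": "action_done",
        "data": {"targets": [], "success": [], "failed": []},
    },
    "conversation_id": "01JH2B56RMR2DABQQGAAA9X95D",
}
print(extract_speech(sample))  # the headline string above
```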
Below is an example template entity definition based on the Model Context Protocol tool calling to fetch a web page:
```yaml
template:
  - trigger:
      - platform: time_pattern
        hours: "/1"
      # Used for testing
      - platform: homeassistant
        event: start
    action:
      - action: conversation.process
        data:
          agent_id: conversation.google_generative_ai
          text: >-
            Please visit bbc.com and summarize the single most important headline.
            Please respond with succinct output as the output will be used as
            headline for an eInk display with limited space, so answers must be
            less than 200 characters.
        response_variable: headline
    sensor:
      - name: Headline
        # A state is required for trigger-based template sensors; the headline
        # is also exposed as a title attribute.
        state: "{{ headline.response.speech.plain.speech }}"
        attributes:
          title: "{{ headline.response.speech.plain.speech }}"
        unique_id: "d3641cdf-aa9f-4169-acae-7f7ba989c492"
```
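The prompt asks for fewer than 200 characters, but nothing enforces that the model complies. If you post-process the headline before sending it to the display, a defensive clamp like this sketch keeps it within budget (the function name and ellipsis handling are my assumptions; the 200-character limit comes from the prompt above):

```python
def fit_headline(text: str, limit: int = 200) -> str:
    """Clamp a headline to the display's character budget, appending an
    ellipsis when the model's answer runs over the limit."""
    text = text.strip()
    if len(text) <= limit:
        return text
    # Reserve one character for the ellipsis
    return text[: limit - 1].rstrip() + "…"
```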

Now go out and make yourself an ESPHome E-ink display if you don't have one already!