
@allenporter
Last active April 14, 2025 20:33

Home Assistant Model Context Protocol integration

TL;DR

Completing these steps gives you an LLM-powered web scraper in Home Assistant via the Model Context Protocol, with an example of a template entity that extracts news headlines for a display.

Pre-requisites

This assumes you already know about the following:

  • Home Assistant
  • Voice Assistant with Conversation Agent (OpenAI, Google Gemini, Anthropic, etc)
  • Python virtual environment for running MCP server

Overview

This guide will get you an LLM-powered template entity that uses MCP to fetch external web pages:

graph LR
    A["Home Assistant LLM Conversation Agent"] <--> |sse| B["mcp-proxy"]
    B <--> |stdio| C["mcp-fetch MCP Server"]
    C <--> |http/s| D["web pages"]

    style A fill:#ffe6f9,stroke:#333,color:black,stroke-width:2px
    style B fill:#e6e6ff,stroke:#333,color:black,stroke-width:2px
    style C fill:#e6ffe6,stroke:#333,color:black,stroke-width:2px
    style D fill:#e6ffe6,stroke:#333,color:black,stroke-width:2px

Install MCP Proxy & Fetch MCP Server

Install dependencies mcp-proxy and MCP Fetch Server.

$ uv pip install mcp-proxy mcp-server-fetch

Most MCP servers are stdio based (e.g. spawned by Claude Desktop), so we need a server to expose them to Home Assistant. We use mcp-proxy, which runs an SSE server and spawns the stdio MCP server that can fetch web pages. The proxy spawns the command without any environment variables, so we pass our PATH explicitly.
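To see what the stdio side of this bridge looks like, here is a conceptual sketch (not the real mcp-proxy implementation): a stdio MCP server speaks newline-delimited JSON-RPC over a child process's stdin/stdout, and the proxy forwards messages between that pipe and its SSE clients. This example uses `cat` as a stand-in echo "server" purely to show the plumbing:

```python
import json
import subprocess

# Spawn a child process and talk to it over pipes, the way a proxy
# would spawn a stdio MCP server. `cat` simply echoes input back,
# standing in for a real server's JSON-RPC responses.
proc = subprocess.Popen(
    ["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)

# MCP messages are JSON-RPC objects, one per line on the pipe.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()

# Read the response line and parse it; with `cat` it is the echoed request.
response = json.loads(proc.stdout.readline())
print(response["method"])  # prints "tools/list"

proc.stdin.close()
proc.wait()
```

A real proxy keeps this loop running and relays each line to/from the SSE connection; the `tools/list` method name is from the MCP spec, everything else here is illustrative.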

$ mcp-proxy --sse-port 42783 --env PATH "${PATH}" -- uv run mcp-server-fetch
...
INFO:     Uvicorn running on http://127.0.0.1:42783 (Press CTRL+C to quit)

The SSE server is now exposed at http://127.0.0.1:42783/sse. You can set flags to change the IP and port the proxy listens on.
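To keep the proxy running across reboots, you could wrap the command in a systemd unit. A minimal sketch, where the virtualenv path and user are assumptions you should adjust for your install:

```ini
[Unit]
Description=mcp-proxy bridging SSE to the mcp-server-fetch stdio server
After=network.target

[Service]
# Path to mcp-proxy inside your Python virtual environment (assumed).
ExecStart=/home/homeassistant/.venv/bin/mcp-proxy --sse-port 42783 --env PATH "/usr/local/bin:/usr/bin:/bin" -- uv run mcp-server-fetch
Restart=on-failure
User=homeassistant

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now` once saved under `/etc/systemd/system/`.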

Configure Model Context Protocol Integration

Important

NOTE: This integration is currently available in the 2025.2.0b beta release

Open your Home Assistant instance and start setting up a new integration.

Manually add the integration

Set the SSE Server URL to your MCP proxy server SSE endpoint e.g. http://127.0.0.1:42783/sse. Make sure the URL ends with /sse.

The integration will create a new LLM API called mcp-fetch that is available to conversation agents. It does not add any other entities or devices.

Configure Conversation Agent

  1. Navigate to your existing conversation agent integration and reconfigure it

  2. Set the LLM Control to mcp-fetch

  3. Update the prompt to something simple for now, such as: You are an agent for Home Assistant, with access to tools through an external server. This external server enables LLMs to fetch web page contents.

Screenshot of Conversation Agent Configuration

Try it Out

Open the conversation agent and ask it to fetch a web page:

Screenshots of the conversation agent fetching and summarizing a web page

Prompt Engineering

Let's now experiment with using the tool to make a sensor. We should ask the model to respond more succinctly, and call it from Developer Tools. Here is a YAML example:

action: conversation.process
data:
  agent_id: conversation.google_generative_ai
  text: >-
    Please visit bbc.com and summarize the first headline. Please respond with
    succinct output as the output will be used as headline for an eInk display
    with limited space. 

We could improve this further by giving a few example headlines (few-shot prompting); however, the model already follows instructions well and produces a headline:

response:
  speech:
    plain:
      speech: Trump threatens Greenland and Panama Canal.
      extra_data: null
  card: {}
  language: en
  response_type: action_done
  data:
    targets: []
    success: []
    failed: []
conversation_id: 01JH2B56RMR2DABQQGAAA9X95D
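If you do want to try few-shot prompting, one hedged variant of the call might look like the following (the example headlines are invented purely to illustrate the desired style):

```yaml
action: conversation.process
data:
  agent_id: conversation.google_generative_ai
  text: >-
    Please visit bbc.com and summarize the single most important headline.
    Respond succinctly, as the output will be shown on an eInk display with
    limited space. Examples of the desired style:
    "Storms batter UK coast", "Markets rally on rate cut".
```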

Template Entity

Below is an example template entity definition based on the Model Context Protocol tool calling to fetch a web page:

template:
- trigger:
    - platform: time_pattern
      hours: "/1"
    # Used for testing
    - platform: homeassistant
      event: start
  action:
    - action: conversation.process
      data:
        agent_id: conversation.google_generative_ai
        text: >-
          Please visit bbc.com and summarize the single most important headline.
          Please respond with succinct output as the output will be used as
          headline for an eInk display with limited space, so answers must be
          less than 200 characters.
        response_variable: headline
  sensor:
    - name: Headline
      attributes:
        title: "{{ headline.response.speech.plain.speech }}"
      unique_id: "d3641cdf-aa9f-4169-acae-7f7ba989c492"
  unique_id: "272f0508-3e27-4179-9aca-06d8333874e7"
Screenshot of template entity output

Now go out and make yourself an ESPHome E-ink display if you don't have one already!
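If you do build a display, ESPHome can pull the headline attribute straight from Home Assistant with its `homeassistant` text sensor platform. A minimal sketch, where the entity id, display model, and font are assumptions to adapt to your hardware:

```yaml
# Mirror the Home Assistant sensor's "title" attribute into ESPHome.
text_sensor:
  - platform: homeassistant
    id: headline
    entity_id: sensor.headline
    attribute: title

display:
  - platform: waveshare_epaper  # swap in your panel's platform and pins
    # ... model/pin configuration elided ...
    lambda: |-
      // Draw the latest headline at the top-left of the screen.
      it.print(0, 0, id(font1), id(headline).state.c_str());
```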

@nirnachmani
Copy link

Thank you. Can I use the LLM for Assist and mcp-fetch at the same time? I.e. would the LLM still be able to control my home while using mcp-fetch?

@allenporter
Copy link
Author

Not yet! Right now the only approach is to make a separate conversation entity, but in the future I'd like to make conversation entities able to support multiple LLM APIs.

@MorganBridget
Copy link

This is fantastic. I have wanted something just like this it’s like you’ve read my mind!! But yes 100% agree it would be nice to work with assist simultaneously.

@Hedda
Copy link

Hedda commented Feb 25, 2025

Right now the only approach is to make a separate conversation entity, but in the future i'd like to make conversation entities able to support multiple LLM APIs.

Any chance someone is working on a Fabric MCP server, or better yet some kind of “Fabric integration” (AI tooling / AI agent) for Home Assistant?

fabric (from @danielmiessler) is an open-source framework for augmenting humans using AI. It provides a modular framework for solving specific problems using a crowdsourced set of AI prompts that can be used anywhere.

It would be awesome to have integration(s) for Fabric's "Patterns" (curated, crowdsourced prompts) as AI agents/tooling, so that tools built on Fabric pattern prompts could be used from Home Assistant.

fabric intro video:

(Keep in mind that many of these were recorded when Fabric was Python-based, so remember to use the current install instructions)

Also check out this fabric origin story:

@Hedda
Copy link

Hedda commented Feb 25, 2025

Another related tip is to get inspiration from AgentMark, a framework (from puzzlet-ai) centered around keeping prompts in markdown files:

It allows you to:

  • Keep prompts in markdown + jsx format
  • Import components (these can be markdown files or mdx files)
  • Add logic, loops, conditionals, etc.
  • Call any LLM provider w/ the same syntax
  • Add type safety

@Hedda
Copy link

Hedda commented Mar 9, 2025

Would it, by the way, be a good idea if the IT Automation Suggester integration could be accessed via an Assist conversation agent?

@Hedda
Copy link

Hedda commented Apr 14, 2025

Any thoughts on whether support for Google's new "A2A" (Agent2Agent) open protocol could be integrated? It is an open protocol enabling communication and interoperability between opaque agentic applications.

@allenporter
Copy link
Author

I was reviewing their new Agent SDK but had not yet seen the agent to agent protocol. Will take a look.
