@stevenheidel
Created May 20, 2025 20:20
MCP docs draft

Remote MCP

Allow models to use remote MCP servers to perform tasks.

Model Context Protocol (MCP) is an open protocol that standardizes how applications provide tools and additional context to LLMs. The MCP tool in the Responses API lets developers give the model access to tools hosted on remote MCP servers, which expose tools for an LLM to call over a network in accordance with the protocol.

The remote MCP server tool is available only in the Responses API. There is no cost associated with the tool itself, though you are still billed for output tokens from the API.

The following example uses an open (no auth) MCP server to ask a question about the MCP specification.

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "model": "gpt-4.1",
  "tools": [
    {
      "type": "mcp",
      "server_label": "mcp_about_mcp",
      "server_url": "https://gitmcp.io/modelcontextprotocol/modelcontextprotocol",
      "require_approval": "never"
    }
  ],
  "input": "What transport protocols does the 2025-03-26 version of the MCP spec support?"
}'

As with other tools, like function calls or web search, the model decides whether to call a tool exposed by the remote MCP server based on the content of the input prompt.

If the model decides to call a tool provided by an MCP server, the output array will contain an output item with type mcp_call. This item contains both the arguments that were passed to the MCP server and the output text the server returned.

{
  "id": "mcp_682ca7cfba548191a8d37d6cba9174c4049658ce185cf780",
  "type": "mcp_call",
  "approval_request_id": null,
  "arguments": "{\"query\":\"transport protocol\"}",
  "error": null,
  "name": "search_repo_docs",
  "output": "## Query\n\ntransport protocol.\n\n## Res... <omitted>"
}
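
When handling responses programmatically, you can filter the output array for these items. A minimal Python sketch, using a hand-written stand-in for the response body rather than real API output:

```python
import json

# Hand-written stand-in for a Responses API body containing an MCP tool call.
response = {
    "output": [
        {
            "id": "mcp_example_id",
            "type": "mcp_call",
            "arguments": "{\"query\": \"transport protocol\"}",
            "error": None,
            "name": "search_repo_docs",
            "output": "## Query\n\ntransport protocol...",
        }
    ]
}

# Collect each successful MCP call with its parsed arguments and result text.
mcp_calls = [
    {
        "tool": item["name"],
        "args": json.loads(item["arguments"]),
        "result": item["output"],
    }
    for item in response["output"]
    if item["type"] == "mcp_call" and item["error"] is None
]

print(mcp_calls[0]["tool"])  # search_repo_docs
```

Note that arguments arrives as a JSON-encoded string, not an object, so it needs a json.loads before you can inspect it.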

MCP tools available to the model

When you make an API request that contains MCP tools, the API response will contain an output item of type mcp_list_tools, which lists the tools that the remote MCP server made available to the model. Here is a small subset of the tools provided by the example server above.

{
  "id": "mcpl_682ca7cd7a108191a62a16b16ba23406049658ce185cf780",
  "type": "mcp_list_tools",
  "server_label": "mcp_about_mcp",
  "tools": [
    {
      "annotations": null,
      "description": "Semantically search ...<omitted>",
      "input_schema": {
        "type": "object",
        "properties": {
          "query": {
            "type": "string",
            "description": "The search query to find relevant documentation"
          }
        },
        "required": [
          "query"
        ],
        "additionalProperties": false,
        "$schema": "http://json-schema.org/draft-07/schema#"
      },
      "name": "search_repo_docs"
    },
    ... other tools ...
  ]
}
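
To see what the model can call, you can read the tool names out of this item. A short Python sketch over a trimmed, hand-written example (the second tool name is hypothetical, included only to show a multi-tool list):

```python
# Trimmed, hand-written mcp_list_tools item for illustration.
mcp_list_tools = {
    "type": "mcp_list_tools",
    "server_label": "mcp_about_mcp",
    "tools": [
        {"name": "search_repo_docs", "description": "Semantically search docs"},
        # "fetch_repo_file" is a hypothetical tool name for this example.
        {"name": "fetch_repo_file", "description": "Fetch a file from the repo"},
    ],
}

tool_names = [tool["name"] for tool in mcp_list_tools["tools"]]
print(tool_names)  # ['search_repo_docs', 'fetch_repo_file']
```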

Limiting and filtering allowed tools

To limit which tools the model has access to, you can use the allowed_tools parameter to pass a list of approved tool names, or an object containing criteria for which tools are allowed.

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "model": "gpt-4.1",
  "tools": [
    {
      "type": "mcp",
      "server_label": "mcp_about_mcp",
      "server_url": "https://gitmcp.io/modelcontextprotocol/modelcontextprotocol",
      "require_approval": "never",
      "allowed_tools": ["search_repo_docs"]
    }
  ],
  "input": "What transport protocols does the 2025-03-26 version of the MCP spec support?"
}'

MCP tool call approvals

In our first example, we did not require the model to seek approval before executing an MCP server action, which is fine for low-stakes tasks like searching documentation. But for more sensitive tasks, you can require the model to get approval from your code before taking the action. The high-level flow looks like this:

  • You make an API request to /v1/responses with MCP tools that require approval.
  • The API responds with output items of type mcp_approval_request that contain the tool the model wants to call and the arguments it wants to pass.
  • You make a second API request to /v1/responses, passing back all the output items as input, including an input item of type mcp_approval_response with approve set to true or false.
  • The model proceeds (or not) with the MCP tool call, returning mcp_call items in the output.

Let's make the same MCP request we made earlier, but set require_approval to always in the tool configuration.

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "model": "gpt-4.1",
  "tools": [
    {
      "type": "mcp",
      "server_label": "mcp_about_mcp",
      "server_url": "https://gitmcp.io/modelcontextprotocol/modelcontextprotocol",
      "require_approval": "always"
    }
  ],
  "input": "What transport protocols does the 2025-03-26 version of the MCP spec support?"
}'

In the API response's output array, we'll get an mcp_approval_request output item that looks like this:

{
  "id": "mcpr_682caf0060e481918d966992503c2b71005cccc8180c8a9e",
  "type": "mcp_approval_request",
  "arguments": "{\"query\":\"transport protocols\"}",
  "name": "search_repo_docs",
  "server_label": "mcp_about_mcp"
}
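
In your application code, you can scan the output for pending approval requests and decide what to surface to a user or policy check. A Python sketch over a hand-written response body:

```python
import json

# Hand-written stand-in for a response containing an approval request.
response = {
    "output": [
        {
            "id": "mcpr_example_id",
            "type": "mcp_approval_request",
            "arguments": "{\"query\": \"transport protocols\"}",
            "name": "search_repo_docs",
            "server_label": "mcp_about_mcp",
        }
    ]
}

# Gather every tool call awaiting approval.
pending = [
    item for item in response["output"]
    if item["type"] == "mcp_approval_request"
]

for request in pending:
    args = json.loads(request["arguments"])
    print(f"{request['server_label']}.{request['name']} wants to run with {args}")
```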

To instruct the model to go ahead with the MCP tool call, we'll make a second API request containing an mcp_approval_response item, in addition to the approval request item we received earlier (and any other relevant input items as context, like our original prompt):

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "model": "gpt-4.1",
  "tools": [
    {
      "type": "mcp",
      "server_label": "mcp_about_mcp",
      "server_url": "https://gitmcp.io/modelcontextprotocol/modelcontextprotocol",
      "require_approval": "always"
    }
  ],
  "input": [
    { 
      "type": "message",
      "role": "user", 
      "content": [{
        "type": "input_text",
        "text": "What transport protocols does the 2025-03-26 version of the MCP spec support?"
      }]
    },
    {
      "id": "mcpr_682caf0060e481918d966992503c2b71005cccc8180c8a9e",
      "type": "mcp_approval_request",
      "arguments": "{\"query\":\"transport protocols\"}",
      "name": "search_repo_docs",
      "server_label": "mcp_about_mcp"
    },
    {
      "type": "mcp_approval_response",
      "approve": true,
      "approval_request_id": "mcpr_682caf0060e481918d966992503c2b71005cccc8180c8a9e"
    }
  ]
}'
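
Constructing that follow-up input array by hand is error-prone, so you may want a small helper. The function below is an illustrative sketch, not part of any official SDK: it echoes the prior input and output items, then appends an mcp_approval_response for each pending request.

```python
def build_approval_followup(prior_input, response_output, approve=True):
    """Build the `input` array for the follow-up /v1/responses request.

    Echoes the original input items and the response's output items, then
    appends an mcp_approval_response for each mcp_approval_request.
    Illustrative helper only, not part of any official SDK.
    """
    items = list(prior_input) + list(response_output)
    for item in response_output:
        if item["type"] == "mcp_approval_request":
            items.append({
                "type": "mcp_approval_response",
                "approve": approve,
                "approval_request_id": item["id"],
            })
    return items

# Example usage with hand-written stand-ins for the items shown above.
prior_input = [{
    "type": "message",
    "role": "user",
    "content": [{
        "type": "input_text",
        "text": "What transport protocols does the 2025-03-26 version of the MCP spec support?",
    }],
}]
response_output = [{
    "id": "mcpr_example_id",
    "type": "mcp_approval_request",
    "arguments": "{\"query\": \"transport protocols\"}",
    "name": "search_repo_docs",
    "server_label": "mcp_about_mcp",
}]

followup_input = build_approval_followup(prior_input, response_output)
print(followup_input[-1]["approve"])  # True
```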

MCP tool call authentication

Remote MCP servers, particularly those that can take sensitive actions, often require clients to authenticate in order to use their tools. In your tool configuration, you can use the headers property to pass a set of key/value pairs that will be sent as HTTP headers on requests to the MCP server.

The following example shows how you would pass authentication information in a request to Stripe's MCP server.

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "model": "gpt-4.1",
  "tools": [
    {
      "type": "mcp",
      "server_label": "stripe",
      "server_url": "https://mcp.stripe.com",
      "require_approval": "always",
      "headers": { "Authorization": "Bearer <api key goes here>" }
    }
  ],
  "input": "user_291471 used 700 minutes of storage at 30 cents/minute. Generate a payment link for this."
}'

OpenAI discards all values in the headers object, along with all parts of the MCP server URL other than the scheme, domain, and subdomains, as soon as the request completes. Headers and MCP server URL parameters are not available on Response objects fetched via the API. This also means that any authorization keys needed to make tool calls, as well as the MCP server URL itself, must be passed via the API in every request.
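
Since nothing about the server configuration persists between requests, one convenient pattern is a small factory that rebuilds the full tool configuration for every call. A sketch, assuming the secret lives in an environment variable (STRIPE_API_KEY is an assumed name, not something the API requires):

```python
import os

def stripe_mcp_tool():
    """Rebuild the complete MCP tool config for every request, since the
    headers and server URL are discarded once each request completes."""
    return {
        "type": "mcp",
        "server_label": "stripe",
        "server_url": "https://mcp.stripe.com",
        "require_approval": "always",
        # STRIPE_API_KEY is an assumed environment variable name.
        "headers": {"Authorization": f"Bearer {os.environ.get('STRIPE_API_KEY', '')}"},
    }

# Include a freshly built config in every request payload.
payload = {
    "model": "gpt-4.1",
    "tools": [stripe_mcp_tool()],
    "input": "Generate a payment link for user_291471.",
}
```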

Risks and safety

  • Connect to trusted servers.

    • Pick official servers hosted by the service providers themselves (e.g. we recommend connecting to the Stripe server hosted by Stripe itself at mcp.stripe.com, rather than a Stripe MCP server hosted by a third party).
    • Because there aren't many official remote MCP servers today, you may be tempted to use an MCP server hosted by an organization that doesn't operate the underlying service and simply proxies requests to that service's API. If you must do this, be extra careful to do your due diligence on these "aggregator" MCP servers before using them.
  • Log and review data being shared with third-party MCP servers.

    • Because MCP servers define their own tool definitions, they may request data that you may not be comfortable sharing with the host of that MCP server. Because of this, the MCP tool in the Responses API defaults to requiring approval of each MCP tool call. When developing your application, carefully review the type of data being shared with these MCP servers. Once you are confident you trust a given MCP server, you can skip these approvals for faster execution.
    • We also recommend logging any data sent to MCP servers. If you're using the Responses API with store=true, this data is already logged via the API for 30 days. You may also want to log this data in your own systems and review it periodically to ensure data is being shared as you expect.

Implications on Zero Data Retention and Data Residency

The MCP tool is compatible with Zero Data Retention and Data Residency, but note that OpenAI can only guarantee adherence to Zero Data Retention and Data Residency guidelines for the parts of data processing that OpenAI controls. In other words, if your organization has Data Residency in Europe, OpenAI will ensure that all data processing and storage take place in Europe up until the point data is sent to the MCP server. It is your responsibility to ensure that the MCP server also adheres to any Zero Data Retention or Data Residency requirements you may have.
