OpenAI function calling example

Example curl requests for the OpenAI-compatible function calling API, demonstrated with a local Ollama Docker container running the llama3.1 model.
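
If you don't already have a local Ollama instance running, a minimal setup might look like this (assuming Docker is installed; the container name "ollama" and the named volume are just examples):

docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama
docker exec -it ollama ollama pull llama3.1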

Step 1:
Ask the LLM a question and provide the function it can call to gather the required information.
In this case, we want to know the current weather, so we provide the get_current_weather function definition.

Note: If you use a local Ollama instance, you can omit the "Authorization" header.

Request:

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "llama3.1",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "How is the current weather in Graz?"
      }
    ],
    "parallel_tool_calls": false,
    "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
                "type": "string",
                "description": "The city and country, eg. San Francisco, USA"
            },
            "format": { "type": "string", "enum": ["celsius", "fahrenheit"] }
          },
          "required": ["location", "format"]
        }
      }
    }
   ]
}'

Response:
Llama 3.1 responds with a tool call, requesting the result of the get_current_weather function:

{
  "id": "chatcmpl-202",
  "object": "chat.completion",
  "created": 1723311089,
  "model": "llama3.1",
  "system_fingerprint": "fp_ollama",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
          {
            "id": "call_fu6au6uj",
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "arguments": "{\"format\":\"fahrenheit\",\"location\":\"Graz, Austria\"}"
            }
          }
        ]
      },
      "finish_reason": "tool_calls"
    }
  ],
  "usage": {
    "prompt_tokens": 184,
    "completion_tokens": 28,
    "total_tokens": 212
  }
}
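
To feed this tool call into your own code, you can extract the function name and arguments from the response, for example with jq (assuming the response above was saved to a file named response.json; note that the arguments field is itself a JSON-encoded string, which jq's fromjson decodes):

jq -r '.choices[0].message.tool_calls[0].function.name' response.json
jq -r '.choices[0].message.tool_calls[0].function.arguments | fromjson | .location' response.json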

Step 2:
We can now execute our local get_current_weather function and return its result (30 °C in Graz) to the LLM by responding like this:

Request:

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "llama3.1",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "How is the current weather?"
        },
        {
            "role": "assistant",
            "content": "",
            "tool_calls": [
                {
                    "id": "call_fu6au6uj",
                    "type": "function",
                    "function": {
                        "name": "get_current_weather",
                        "arguments": "{\"format\":\"fahrenheit\",\"location\":\"Graz, Austria\"}"
                    }
                }
            ]
        },
        {
            "role": "tool",
            "content": "{\"city\":\"Graz\",\"weather\":\"30°C\"}",
            "tool_call_id": "call_fu6au6uj"
        }
    ],
    "parallel_tool_calls": false,
    "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
                "type": "string",
                "description": "The city and country, eg. San Francisco, USA"
            },
            "format": { "type": "string", "enum": ["celsius", "fahrenheit"] }
          },
          "required": ["location", "format"]
        }
      }
    }
   ]
}'

Response:
The LLM now returns the final answer, incorporating the result of the get_current_weather function:

{
    "id": "chatcmpl-371",
    "object": "chat.completion",
    "created": 1723315198,
    "model": "llama3.1",
    "system_fingerprint": "fp_ollama",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The current temperature in Graz is 30°C (86°F), which assumes it's clear and sunny today."
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 81,
        "completion_tokens": 23,
        "total_tokens": 104
    }
}

You can also try this yourself by executing only the request from Step 2, since it already contains the whole chat history.
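
For reference, the whole round trip can also be scripted. The following is a rough, untested sketch using bash and jq; the Authorization header is omitted since it targets a local Ollama instance, and the tool result is simply hard-coded instead of calling a real weather service:

#!/usr/bin/env bash
# Rough sketch of the two steps above, chained together (requires curl and jq).
set -euo pipefail

BASE_URL="http://localhost:11434/v1"
MODEL="llama3.1"

TOOLS='[{
  "type": "function",
  "function": {
    "name": "get_current_weather",
    "description": "Get the current weather",
    "parameters": {
      "type": "object",
      "properties": {
        "location": { "type": "string", "description": "The city and country, e.g. San Francisco, USA" },
        "format": { "type": "string", "enum": ["celsius", "fahrenheit"] }
      },
      "required": ["location", "format"]
    }
  }
}]'

# Step 1: ask the question and let the model request a tool call.
FIRST=$(curl -s "$BASE_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg model "$MODEL" --argjson tools "$TOOLS" '{
        model: $model,
        messages: [
          { role: "system", content: "You are a helpful assistant." },
          { role: "user", content: "How is the current weather in Graz?" }
        ],
        parallel_tool_calls: false,
        tools: $tools
      }')")

TOOL_CALL_ID=$(echo "$FIRST" | jq -r '.choices[0].message.tool_calls[0].id')
TOOL_ARGS=$(echo "$FIRST" | jq -r '.choices[0].message.tool_calls[0].function.arguments')

# "Call" the local function - here the result is simply hard-coded.
TOOL_RESULT='{"city":"Graz","weather":"30°C"}'

# Step 2: send the whole history plus the tool result back to the model.
SECOND=$(curl -s "$BASE_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg model "$MODEL" --argjson tools "$TOOLS" \
          --arg id "$TOOL_CALL_ID" --arg args "$TOOL_ARGS" --arg result "$TOOL_RESULT" '{
        model: $model,
        messages: [
          { role: "system", content: "You are a helpful assistant." },
          { role: "user", content: "How is the current weather in Graz?" },
          { role: "assistant", content: "", tool_calls: [
            { id: $id, type: "function", function: { name: "get_current_weather", arguments: $args } }
          ] },
          { role: "tool", content: $result, tool_call_id: $id }
        ],
        parallel_tool_calls: false,
        tools: $tools
      }')")

echo "$SECOND" | jq -r '.choices[0].message.content'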
