Last active September 20, 2024 14:10
Amazing work! The latest gpt-3.5-turbo and gpt-4-turbo add support for parallel tool calls by injecting an extra tool. Here is the namespace and description I obtained from OpenAI's API:
(continuation from the normal functions namespace)

```
} // namespace functions

## multi_tool_use

// This tool serves as a wrapper for utilizing multiple tools. Each tool that can be used must be specified in the tool sections. Only tools in the functions namespace are permitted.
// Ensure that the parameters provided to each tool are valid according to that tool's specification.
namespace multi_tool_use {

// Use this function to run multiple tools simultaneously, but only if they can operate in parallel. Do this even if the prompt suggests using the tools sequentially.
type parallel = (_: {
// The tools to be executed in parallel. NOTE: only functions tools are permitted
tool_uses: {
// The name of the tool to use. The format should either be just the name of the tool, or in the format namespace.function_name for plugin and function tools.
recipient_name: string,
// The parameters to pass to the tool. Ensure these are valid according to the tool's own specifications.
parameters: object,
}[],
}) => any;

} // namespace multi_tool_use
```
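In practice, when the model uses this wrapper, the arguments of the single `multi_tool_use.parallel` call bundle the individual invocations, which the client must fan back out. A minimal Python sketch of parsing such a payload (the tool names `get_weather` and `get_time` are hypothetical examples, not part of the injected prompt):

```python
import json

# A hypothetical arguments string as it might appear in a
# multi_tool_use.parallel tool call (tool names are made up).
arguments = json.dumps({
    "tool_uses": [
        {"recipient_name": "functions.get_weather",
         "parameters": {"city": "Tokyo"}},
        {"recipient_name": "functions.get_time",
         "parameters": {"timezone": "Asia/Tokyo"}},
    ]
})

# Fan the wrapper call back out into individual tool invocations.
for use in json.loads(arguments)["tool_uses"]:
    # recipient_name may be bare or namespace-qualified; keep the last part.
    name = use["recipient_name"].split(".")[-1]
    print(name, use["parameters"])
```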
I've updated the notebook to use the new tool calling interface and support the parallel tool calling option. Notable changes:
- Validations are now exposed to the model (minimum, maximum, pattern).
- Enums are no longer exposed to the model (note: it's still possible that OpenAI supports them through controlled generation, but this is untested).
- Type titles are now exposed to the model. If you are autogenerating the schema title from the field name, this wastes tokens.
One interesting note is that the overhead of the parallel tool calls doesn't seem to be reflected in the prompt usage value.
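For context, here is a minimal sketch of the kind of conversion involved: rendering a JSON-schema function definition into the TypeScript-like form the model sees. The formatting rules and the `get_weather` example are illustrative assumptions, not the notebook's actual implementation:

```python
def render_function(name: str, description: str, schema: dict) -> str:
    """Render a JSON-schema function definition in a TypeScript-like form."""
    type_map = {"string": "string", "integer": "number", "number": "number",
                "boolean": "boolean", "object": "object"}
    lines = [f"// {description}", f"type {name} = (_: {{"]
    required = set(schema.get("required", []))
    for prop, spec in schema.get("properties", {}).items():
        if desc := spec.get("description"):
            lines.append(f"// {desc}")
        optional = "" if prop in required else "?"
        lines.append(f"{prop}{optional}: {type_map.get(spec.get('type'), 'any')},")
    lines.append("}) => any;")
    return "\n".join(lines)

# Hypothetical example function definition.
rendered = render_function(
    "get_weather",
    "Get the current weather for a city.",
    {"type": "object",
     "properties": {
         "city": {"type": "string", "description": "City name."},
         "days": {"type": "integer", "description": "Forecast length."}},
     "required": ["city"]},
)
print(rendered)
```

Counting tokens over a rendering like this, rather than over the raw JSON schema, is what makes the overhead estimate track what the model actually receives.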
https://openai.com/index/introducing-structured-outputs-in-the-api/
Any changes from this?
I applied your `ensure_ascii=False` change, but I disagree about your `FUNCTION_OVERHEAD` change. Instead, OpenAI appears to now be formatting integers as "1" instead of "1.0". The gist is updated with these changes (tests still pass).
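Both serialization details can be checked directly with Python's `json` module (a standalone illustration, not the gist's code):

```python
import json

# ensure_ascii=False emits non-ASCII characters literally instead of
# as \uXXXX escapes, which is shorter and typically cheaper in tokens.
print(json.dumps({"city": "Tōkyō"}))                      # {"city": "T\u014dky\u014d"}
print(json.dumps({"city": "Tōkyō"}, ensure_ascii=False))  # {"city": "Tōkyō"}

# An integer-valued float serializes as "1.0" while an int serializes as "1",
# so the formatting in the prompt depends on which type ends up serialized.
print(json.dumps({"n": 1.0}))  # {"n": 1.0}
print(json.dumps({"n": 1}))    # {"n": 1}
```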