@CGamesPlay
Last active February 1, 2024 23:59
@iwamot

iwamot commented Aug 9, 2023

Could you please tell me about the license of this code?

@CGamesPlay
Author

Public domain, use as you wish. Attribution appreciated but not required.

@iwamot

iwamot commented Aug 10, 2023

I understand. Thank you for your response.

@trknhr

trknhr commented Aug 24, 2023

Great!! How did you figure out FUNCTION_OVERHEAD? I wonder whether OpenAI says anywhere publicly that it uses this prompt format for function calling.

@CGamesPlay
Author

See In [58]. The model returned it.
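If you want to cross-check the constant empirically rather than asking the model, one rough approach (a sketch, not the notebook's code; `format_functions` below is a placeholder for whatever the gist's top-level formatter is named, and the model name is just an example) is to diff the API-reported prompt_tokens with and without functions, then subtract the tiktoken count of the rendered namespace text:

    import tiktoken
    from openai import OpenAI

    client = OpenAI()
    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

    functions = [{"name": "ping", "description": "Reply with pong.",
                  "parameters": {"type": "object", "properties": {}}}]
    messages = [{"role": "user", "content": "hi"}]

    # prompt_tokens reported by the API with and without the functions parameter.
    with_fn = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages, functions=functions, max_tokens=1
    ).usage.prompt_tokens
    without_fn = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages, max_tokens=1
    ).usage.prompt_tokens

    # Whatever remains after accounting for the rendered "namespace functions"
    # text is the fixed overhead. format_functions is a placeholder name for
    # the gist's formatter.
    overhead = (with_fn - without_fn) - len(enc.encode(format_functions(functions)))
    print(overhead)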

@trknhr

trknhr commented Aug 24, 2023

Thanks for the great help.

@msp26

msp26 commented Sep 7, 2023

Great work

@ifsheldon

Neat!

@jussker

jussker commented Dec 15, 2023

Thank you very much!


A few updates:

    ...
    # `json.dumps(o, ensure_ascii=False)` to support Japanese and Chinese.
    def format_enum(schema, indent):
        return " | ".join(json.dumps(o, ensure_ascii=False) for o in schema["enum"])
    ...

    # As of December 16, 2023, after multiple tests, FUNCTION_OVERHEAD should be 12.
    FUNCTION_OVERHEAD = 12
    ...
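As a quick aside on the ensure_ascii point (an illustration, not from the gist): the default escaping turns CJK text into \uXXXX sequences, which is not what the model actually sees in the injected prompt.

    import json

    json.dumps("日本語")                      # '"\\u65e5\\u672c\\u8a9e"' with default escaping
    json.dumps("日本語", ensure_ascii=False)  # '"日本語"', matching the text the model sees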

@CGamesPlay
Author

I applied your ensure_ascii=False change, but I disagree with your FUNCTION_OVERHEAD change. Instead, OpenAI now appears to format integer defaults as "1" instead of "1.0".

    def format_default(schema):
        v = schema["default"]
        if schema["type"] == "number":
            # Render integer-valued numbers without a trailing ".0" to match
            # how OpenAI currently formats them in the injected prompt.
            return f"{v:.0f}" if float(v).is_integer() else str(v)
        else:
            return str(v)

The gist is updated with these changes (tests still pass).
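For anyone following along, a quick illustrative check of the updated helper (these are not the gist's actual test cases):

    format_default({"type": "number", "default": 5.0})    # "5", no trailing ".0"
    format_default({"type": "number", "default": 2.5})    # "2.5"
    format_default({"type": "string", "default": "foo"})  # "foo"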

@eshamanideep

eshamanideep commented Feb 1, 2024

Amazing work! The latest gpt-3.5-turbo and gpt-4-turbo add support for parallel tool calls by injecting an extra tool. Here is the namespace and description I obtained from the OpenAI API:
(continuation from the normal functions namespace)

    } // namespace functions

    ## multi_tool_use

    // This tool serves as a wrapper for utilizing multiple tools. Each tool that can be used must be specified in the tool sections. Only tools in the functions namespace are permitted.
    // Ensure that the parameters provided to each tool are valid according to that tool's specification.
    namespace multi_tool_use {

    // Use this function to run multiple tools simultaneously, but only if they can operate in parallel. Do this even if the prompt suggests using the tools sequentially.
    type parallel = (_: {
    // The tools to be executed in parallel. NOTE: only functions tools are permitted
    tool_uses: {
    // The name of the tool to use. The format should either be just the name of the tool, or in the format namespace.function_name for plugin and function tools.
    recipient_name: string,
    // The parameters to pass to the tool. Ensure these are valid according to the tool's own specifications.
    parameters: object,
    }[],
    }) => any;
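For rough token accounting, the arguments the model emits for multi_tool_use.parallel are just JSON matching the schema above. A minimal illustration of that shape in Python (the tool names functions.get_weather and functions.get_time are invented for the example, not part of the injected prompt):

    import json

    # Hypothetical multi_tool_use.parallel arguments; the tool names are made up.
    parallel_args = {
        "tool_uses": [
            {"recipient_name": "functions.get_weather",
             "parameters": {"city": "Tokyo"}},
            {"recipient_name": "functions.get_time",
             "parameters": {"timezone": "Asia/Tokyo"}},
        ]
    }
    print(json.dumps(parallel_args, ensure_ascii=False))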
