@mberman84
Last active May 14, 2024 14:50
app.py
import autogen

# OpenAI model configuration; replace API_KEY with your own key.
config_list = [
    {
        "model": "gpt-4",
        "api_key": "API_KEY",
    }
]

llm_config = {
    "request_timeout": 600,  # renamed to "timeout" in newer AutoGen releases
    "seed": 42,              # enables response caching for reproducible runs
    "config_list": config_list,
    "temperature": 0,
}
assistant = autogen.AssistantAgent(
    name="CTO",
    llm_config=llm_config,
    system_message="Chief technical officer of a tech company",
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # run fully autonomously, no human in the loop
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},  # generated code runs in ./web
    llm_config=llm_config,
    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
Otherwise, reply CONTINUE, or the reason why the task is not solved yet.""",
)
task = """
Write python code to output numbers 1 to 100, and then store the code in a file
"""
user_proxy.initiate_chat(
assistant,
message=task
)
task2 = """
Change the code in the file you just created to instead output numbers 1 to 200
"""
user_proxy.initiate_chat(
assistant,
message=task2
)
@manjunathshiva

Here is the complete code. I'm using a local model served by Text Generation Web UI (not RunPod), with Mistral 7B Instruct Q8 GGUF. Be sure to enable the openai and gallery extensions, along with the api and listen options, under the Sessions tab in Text Gen Web UI.
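If you'd rather launch from the command line than toggle checkboxes on the Sessions tab, the equivalent startup flags look roughly like this (a sketch; flag names may vary by text-generation-webui release):

```shell
# Start text-generation-webui listening on all interfaces, with the API
# server and the OpenAI-compatible extension enabled. The OpenAI-compatible
# endpoint is what the base_url http://localhost:5000/v1 below points at.
python server.py --listen --api --extensions openai gallery
```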

import autogen

config_list = [
    {
        "base_url": "http://localhost:5000/v1",
        "api_key": "NULL",
    }
]

llm_config = {
    "timeout": 600,
    "seed": 42,
    "config_list": config_list,
}

USE_MEMGPT = True

assistant = autogen.AssistantAgent(
    name="CTO",
    llm_config=llm_config,
    system_message="Chief technical officer of a tech company",
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
    llm_config=llm_config,
    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
Otherwise, reply CONTINUE, or the reason why the task is not solved yet.""",
)

task = """
Write python code to output numbers 1 to 100, and then store the code in a file
"""

user_proxy.initiate_chat(
    assistant,
    message=task,
)

task2 = """
Change the code in the file you just created to instead output numbers 1 to 200
"""

user_proxy.initiate_chat(
    assistant,
    message=task2,
)


ray30087 commented Jan 9, 2024

I'm getting stuck on the following error:
raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4 in organization org-ehVxQx3I1TFqEcMydXbUkFUz on tokens per min (TPM): Limit 10000, Used 9343, Requested 1103. Please try again in 2.676s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}
What can I do about this rate-limiting error?
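A 429 means the account's tokens-per-minute quota was exhausted mid-conversation. One common workaround (a sketch, not part of the gist) is to wrap the call in an exponential-backoff retry; the helper below is generic, and `openai.RateLimitError` is the exception class the traceback above names:

```python
import time

def with_backoff(fn, max_retries=5, base_delay=2.0, retryable=(Exception,)):
    """Call fn(), retrying with exponential backoff on retryable errors.

    Pass retryable=(openai.RateLimitError,) to retry only on 429s.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except retryable:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the original error
            time.sleep(base_delay * (2 ** attempt))  # waits 2s, 4s, 8s, ...

# Hypothetical usage with the gist's agents:
# with_backoff(lambda: user_proxy.initiate_chat(assistant, message=task),
#              retryable=(openai.RateLimitError,))
```

Lowering `max_consecutive_auto_reply`, or switching the config to a model with a higher TPM limit, also reduces how fast the quota is consumed.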
