
@mberman84
Last active May 14, 2024 14:50
app.py
import autogen

config_list = [
    {
        'model': 'gpt-4',
        'api_key': 'API_KEY'
    }
]

llm_config = {
    "request_timeout": 600,
    "seed": 42,
    "config_list": config_list,
    "temperature": 0
}

assistant = autogen.AssistantAgent(
    name="CTO",
    llm_config=llm_config,
    system_message="Chief technical officer of a tech company"
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
    llm_config=llm_config,
    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
Otherwise, reply CONTINUE, or the reason why the task is not solved yet."""
)

task = """
Write python code to output numbers 1 to 100, and then store the code in a file
"""

user_proxy.initiate_chat(
    assistant,
    message=task
)

task2 = """
Change the code in the file you just created to instead output numbers 1 to 200
"""

user_proxy.initiate_chat(
    assistant,
    message=task2
)
@giammy677dev

I also keep getting an endless loop :( Has anyone solved it?

@lytworkshift

I'm getting an error, please advise:

Traceback (most recent call last):
  File "c:\Users\GuestUser\vsCode\autogen\app.py", line 17, in <module>
    assistant = autogen.AssistantAgent(
                ^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'autogen' has no attribute 'AssistantAgent'

try
python -m pip install autogen

@fenixlam

fenixlam commented Nov 2, 2023

assistant = autogen.AssistantAgent(
AttributeError: partially initialized module 'autogen' has no attribute 'AssistantAgent' (most likely due to a circular import)

My Python environment has only pyautogen==0.1.14, no autogen, but this error still appears.
The funny thing is, I have both an Ubuntu machine and a Win11 machine; the error does not appear on Win11, only on Ubuntu.

@jez120

jez120 commented Nov 2, 2023

According to BARD AI, to fix the loop you need to: "Another way to fix the problem is to change the code so that the user proxy will only automatically reply to messages that are not completed. You can do this by changing the max_consecutive_auto_reply setting to 0. This will ensure that the user proxy will only automatically reply to the first message in a conversation." Tested; it stopped looping.
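
For reference, a minimal sketch of that change against the gist's user_proxy (same parameters as in the gist; max_consecutive_auto_reply is the only difference):

import autogen

# Same proxy as in the gist, but with auto-replies capped at 0:
# the proxy never auto-replies, so the chat ends after the
# assistant's first response instead of looping.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
)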

@tzack1

tzack1 commented Nov 20, 2023

I am using assistants to calculate today's date, based on your video, but it gives this response: As an AI, I don't have real-time capabilities or access to current data, including today's date or the latest stock market figures. To find out today's date, please check your computer or smartphone's clock.


@xeniode

xeniode commented Nov 26, 2023

For anyone trying this on Ubuntu and getting various errors: I have it working. First, make sure to use "use_docker", as adnansalamat mentioned. Make sure you have the python:3 image pulled from the Docker registry. Also, if you are using Docker Desktop, use the locate command to search for your socket. If it is outside of /var/run, you need to make a link from that location to /var/run/docker.sock.
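
If it helps, a rough sketch of what that looks like in this gist's code, assuming pyautogen's code_execution_config accepts a "use_docker" entry (True for the default image, or an image name such as "python:3"):

import autogen

# Execute generated code inside a Docker container instead of the host shell.
# Assumes the python:3 image is already pulled and the Docker socket is
# reachable at /var/run/docker.sock, as described above.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={
        "work_dir": "web",
        "use_docker": "python:3",
    },
)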

@jneb802

jneb802 commented Nov 28, 2023

I'm getting this error: TypeError: Completions.create() got an unexpected keyword argument 'request_timeout'

Has anyone experienced this?

Using pyautogen on Python 3.11.6 in a virtual environment.

@jneb802

jneb802 commented Nov 28, 2023

Found the solution here. Change...

llm_config = {
    "request_timeout": 600,

to...

llm_config = {
    "timeout": 600,
@MarkB122

MarkB122 commented Dec 3, 2023

  1. autogen.AssistantAgent error fix:
    pip uninstall autogen pyautogen
    pip install pyautogen
  2. request_timeout issue:
    "timeout": 600
  3. Infinite loop issue:
    in the user_proxy, set max_consecutive_auto_reply=0
  4. Stopping a session:
    Ctrl + C
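
Putting those fixes together against the original gist, a minimal sketch (placeholder API key; assumes pyautogen is installed):

import autogen

config_list = [
    {
        "model": "gpt-4",
        "api_key": "API_KEY",
    }
]

llm_config = {
    "timeout": 600,  # replaces the removed "request_timeout" argument
    "seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

assistant = autogen.AssistantAgent(
    name="CTO",
    llm_config=llm_config,
    system_message="Chief technical officer of a tech company",
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,  # avoids the infinite loop
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
    llm_config=llm_config,
)

user_proxy.initiate_chat(
    assistant,
    message="Write python code to output numbers 1 to 100, and then store the code in a file",
)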

@Trundicho

For me, it seems that whenever there is a ''' in the LLM response, autogen stops with "Process finished with exit code 0".

Example LLM output:

CTO (to user_proxy):

def get_numbers():
    return list(range(1, 201))

with open("generated.py", "w") as f:
    f.write('''print(*get_numbers())''')

Process finished with exit code 0

@manjunathshiva

Here is the complete code. I am using a Text Gen Web UI local model (not RunPod) with Mistral 7B Instruct Q8 GGUF. Be sure to enable the openai and gallery extensions, plus api and listen, under the Sessions tab in Text Gen Web UI.

import autogen

config_list = [
    {
        "base_url": "http://localhost:5000/v1",
        "api_key": "NULL",
    }
]

llm_config = {
    "timeout": 600,
    "seed": 42,
    "config_list": config_list,
}

USE_MEMGPT = True

assistant = autogen.AssistantAgent(
    name="CTO",
    llm_config=llm_config,
    system_message="Chief technical officer of a tech company"
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
    llm_config=llm_config,
    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
Otherwise, reply CONTINUE, or the reason why the task is not solved yet."""
)

task = """
Write python code to output numbers 1 to 100, and then store the code in a file
"""

user_proxy.initiate_chat(
    assistant,
    message=task
)

task2 = """
Change the code in the file you just created to instead output numbers 1 to 200
"""

user_proxy.initiate_chat(
    assistant,
    message=task2
)

@ray30087

ray30087 commented Jan 9, 2024

I'm getting stuck on the following error:

raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4 in organization org-ehVxQx3I1TFqEcMydXbUkFUz on tokens per min (TPM): Limit 10000, Used 9343, Requested 1103. Please try again in 2.676s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}

What can I do about the rate-limiting error?
