@mberman84
Created October 26, 2023 19:03
AutoGen + MemGPT Code
import os
import autogen
import memgpt.autogen.memgpt_agent as memgpt_autogen
import memgpt.autogen.interface as autogen_interface
import memgpt.agent as agent
import memgpt.system as system
import memgpt.utils as utils
import memgpt.presets as presets
import memgpt.constants as constants
import memgpt.personas.personas as personas
import memgpt.humans.humans as humans
from memgpt.persistence_manager import InMemoryStateManager, InMemoryStateManagerWithPreloadedArchivalMemory, InMemoryStateManagerWithFaiss
import openai
openai.api_key = 'sk-REDACTED'  # replace with your own OpenAI API key
config_list = [
    {
        'model': 'gpt-4'
    },
]

llm_config = {"config_list": config_list, "seed": 42}

user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    system_message="A human admin.",
    code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
)

interface = autogen_interface.AutoGenInterface()  # how MemGPT talks to AutoGen
persistence_manager = InMemoryStateManager()
persona = "I'm a 10x engineer at a FAANG tech company."
human = "I'm a team manager at a FAANG tech company."
memgpt_agent = presets.use_preset(presets.DEFAULT, 'gpt-4', persona, human, interface, persistence_manager)

# MemGPT coder
coder = memgpt_autogen.MemGPTAgent(
    name="MemGPT_coder",
    agent=memgpt_agent,
)

# non-MemGPT PM
pm = autogen.AssistantAgent(
    name="Product_manager",
    system_message="Creative in software product ideas.",
    llm_config=llm_config,
)
groupchat = autogen.GroupChat(agents=[user_proxy, coder, pm], messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
user_proxy.initiate_chat(manager, message="First send the message 'Let's go Mario!'")
@zorgy28

zorgy28 commented Nov 5, 2023

I solved the DEFAULT issue like this:

interface = autogen_interface.AutoGenInterface() # how MemGPT talks to AutoGen
persistence_manager = InMemoryStateManager()
persona = "I am an engineer at Att3st tech company."
human = "I am a team manager at Att3st tech company."
agent_config = llm_config
memgpt_agent = presets.use_preset(presets.DEFAULT_PRESET, model='gpt-4', persona=persona, human=human, interface=interface, persistence_manager=persistence_manager,agent_config=agent_config)

Adding this:

Resolution
I ultimately resolved the issue by modifying the file memgpt/autogen/memgpt_agent.py within my local environment. Specifically, I altered the _generate_reply_for_user_message() function to be asynchronous (async def _generate_reply_for_user_message()). Following this adjustment, the example code operated correctly.

And it all works.
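
For anyone looking for the concrete edit, a minimal sketch of what that change looks like (the complete modified method is posted further down this thread by slaughters85j; the exact body and line numbers depend on your installed MemGPT version):

# memgpt/autogen/memgpt_agent.py -- sketch only, not a drop-in replacement
async def _generate_reply_for_user_message(self, messages=None, sender=None, config=None):
    ...  # body unchanged up to the step() call
    # agent.step() returns a coroutine in this code path, so it must be awaited:
    (
        new_messages,
        heartbeat_request,
        function_failed,
        token_warning,
    ) = await self.agent.step(user_message, first_message=False, skip_verify=self.skip_verify)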

@Phychotk66

import demjson3 as demjson

ModuleNotFoundError: No module named 'demjson3'
I have installed the package and imported it, but it still shows this. How can I fix it?

@MikeyBeez

pip install demjson3

@PinkTreee

I constantly get the AttributeError saying module 'autogen' has no attribute 'UserProxyAgent'. I have done everything, even changed my model to gpt-3.5-turbo, but it still stands. Solutions? I'm new to coding and this is extremely confusing.
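
Not a fix I can verify, but one thing worth checking: depending on which pyautogen version is installed, the agent classes may only be exposed under autogen.agentchat rather than the top-level autogen namespace (that is how later snippets in this thread reference them). A quick sanity check, assuming pyautogen is installed in the active environment:

import autogen
print(autogen.__version__)  # confirm which version is actually being imported

# If the top-level attribute is missing, try the agentchat module:
from autogen.agentchat import UserProxyAgent, AssistantAgent
print(UserProxyAgent, AssistantAgent)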

@MikeyBeez

I discovered that one needs to clone the MemGPT code and pip install -r requirements-local.txt. It downgraded transformers. Now I'm getting a different error: TypeError: 'LocalStateManager' object is not subscriptable when I enter memgpt run.

@Phychotk66

pip install demjson3

I have already installed and imported the demjson3 module, but nothing works; it still shows ModuleNotFoundError: No module named 'demjson3'.

@MikeyBeez

@Phychotk66 Did you do a pip list? My version is 3.0.6.

@MikeyBeez

@Phychotk66 Did you activate the environment?
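
In case it helps with the back-and-forth above, a quick way to check which interpreter and which demjson3 the shell is actually using (standard library only; run it with the same python you use for the script):

import sys
import importlib.metadata

print(sys.executable)                          # which Python / virtualenv is running
print(importlib.metadata.version("demjson3"))  # raises PackageNotFoundError if it isn't in this env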

@MikeyBeez

@Phychotk66 BTW, MemGPT with local models seems incredibly buggy. Fix one error, and you'll get another. This is starting to look like shelfware.

@Phychotk66

@MikeyBeez With the original line:
memgpt_agent = presets.use_preset(presets.DEFAULT, 'gpt-4', persona, human, interface, persistence_manager)
I get:
  File "C:\Users\Perfect\Autogen_yt\app.py", line 33, in <module>
    memgpt_agent = presets.use_preset(presets.DEFAULT, 'gpt-4', persona, human, interface, persistence_manager)
AttributeError: module 'memgpt.presets' has no attribute 'DEFAULT'

And when I change it to:
memgpt_agent = presets.use_preset('gpt-4-turbo', 'gpt-4', persona, human, interface, persistence_manager)
I get:
TypeError: use_preset() missing 1 required positional argument: 'persistence_manager'

@Phychotk66


@MikeyBeez Thank you sir, the problems are solved.

@tjthejuggler

I solved the DEFAULT issue like this:

interface = autogen_interface.AutoGenInterface() # how MemGPT talks to AutoGen
persistence_manager = InMemoryStateManager()
persona = "I am an engineer at Att3st tech company."
human = "I am a team manager at Att3st tech company."
agent_config = llm_config
memgpt_agent = presets.use_preset(presets.DEFAULT_PRESET, model='gpt-4', persona=persona, human=human, interface=interface, persistence_manager=persistence_manager,agent_config=agent_config)

But I'm still blocked on the TypeError: cannot unpack non-iterable coroutine object error.

Does anyone have a solution to this?

@zorgy28

zorgy28 commented Nov 6, 2023

Yes, modify as mentioned here:
Resolution
I ultimately resolved the issue by modifying the file memgpt/autogen/memgpt_agent.py within my local environment. Specifically, I altered the _generate_reply_for_user_message() function to be asynchronous (async def _generate_reply_for_user_message()). Following this adjustment, the example code operated correctly.

@tjthejuggler

Yes, modify as mentioned here: Resolution I ultimately resolved the issue by modifying the file memgpt/autogen/memgpt_agent.py within my local environment. Specifically, I altered the _generate_reply_for_user_message() function to be asynchronous (async def _generate_reply_for_user_message()). Following this adjustment, the example code operated correctly.

Thanks so much! This worked for me. You're a rock star!

@Sichtermann

import os
import autogen
import memgpt.autogen.memgpt_agent as memgpt_autogen
import memgpt.autogen.interface as autogen_interface 
import memgpt.agent as agent
import memgpt.system as system
import memgpt.utils as utils
import memgpt.presets as presets
import memgpt.constants as constants
import memgpt.personas.personas as personas
import memgpt.humans.humans as humans
from memgpt.persistence_manager import InMemoryStateManager, InMemoryStateManagerWithPreloadedArchivalMemory, InMemoryStateManagerWithFaiss
import openai
openai.api_key = 'sk-asdasdasdasdasd'

config_list = [
    {
        'model': 'gpt-4'
    },
]

llm_config = {"config_list": config_list, "seed": 42}
user_proxy = memgpt_autogen.UserProxyAgent(
    name="User_proxy",
    system_message="A human admin.",
    code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
)

interface = autogen_interface.AutoGenInterface() # how MemGPT talks to AutoGen
persistence_manager = InMemoryStateManager()
persona = "I\'m a 10x engineer at a FAANG tech company."
human = "I\'m a team manager at a FAANG tech company."
memgpt_agent = presets.use_preset(presets.SYNC_CHAT, None, 'gpt-4', persona, human, interface, persistence_manager)


# MemGPT coder
coder = memgpt_autogen.MemGPTAgent(
    name="MemGPT_coder",
    agent=memgpt_agent,
)

# non-MemGPT PM
pm = autogen.agentchat.AssistantAgent(
    name="Product_manager",
    system_message="Creative in software product ideas.",
    llm_config=llm_config,
)

groupchat = autogen.agentchat.GroupChat(agents=[user_proxy, coder, pm], messages=[], max_round=12)
manager = autogen.agentchat.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="First send the message 'Let's go Mario!'")

Had to change some stuff.

@twerpyfie

Was there an update or something?

@Sichtermann, your code results in:


User_proxy (to chat_manager):

First send the message 'Let's go Mario!'

--------------------------------------------------------------------------------
Provide feedback to chat_manager. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: Hello how are you?
User_proxy (to chat_manager):

Hello how are you?

--------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/humanity/PycharmProjects/Luna_Stream/main.py", line 52, in <module>
    user_proxy.initiate_chat(manager, message="First send the message 'Let's go Mario!'")
  File "/home/humanity/PycharmProjects/Luna_Stream/venv/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 531, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "/home/humanity/PycharmProjects/Luna_Stream/venv/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 334, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/humanity/PycharmProjects/Luna_Stream/venv/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 462, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/humanity/PycharmProjects/Luna_Stream/venv/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 781, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/humanity/PycharmProjects/Luna_Stream/venv/lib/python3.11/site-packages/autogen/agentchat/groupchat.py", line 164, in run_chat
    reply = speaker.generate_reply(sender=self)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/humanity/PycharmProjects/Luna_Stream/venv/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 781, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/humanity/PycharmProjects/Luna_Stream/venv/lib/python3.11/site-packages/memgpt/autogen/memgpt_agent.py", line 235, in _generate_reply_for_user_message
    ) = self.agent.step(user_message, first_message=False, skip_verify=self.skip_verify)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/humanity/PycharmProjects/Luna_Stream/venv/lib/python3.11/site-packages/memgpt/agent.py", line 657, in step
    raise e
  File "/home/humanity/PycharmProjects/Luna_Stream/venv/lib/python3.11/site-packages/memgpt/agent.py", line 587, in step
    context_window=self.config.context_window,
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'context_window'
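
(Just a guess, not verified:) that last frame fails because the agent's config ends up as None, and the snippet above passes None as the second positional argument to use_preset. zorgy28's earlier fix avoids this by passing an agent config explicitly, along these lines (argument names as in that comment; adjust to your MemGPT version):

agent_config = llm_config  # reuse the AutoGen llm_config dict, as zorgy28 did above
memgpt_agent = presets.use_preset(
    presets.DEFAULT_PRESET,
    model='gpt-4',
    persona=persona,
    human=human,
    interface=interface,
    persistence_manager=persistence_manager,
    agent_config=agent_config,
)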

@MarcRene

@Sichtermann's code above worked for me perfectly, thanks for this.

@KenichiQaz

KenichiQaz commented Nov 16, 2023

Running this code, I am getting this error.

This seems to be the problem, but I can't seem to find the solution to it:
import memgpt.presets as presets
...
AttributeError: module 'memgpt.presets' has no attribute 'use_preset'

Fixed this problem with this:

import memgpt.presets.presets as presets
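
In other words, memgpt.presets is a package in this version, so use_preset lives in the nested presets module rather than on the package itself. A minimal check of that assumption:

import memgpt.presets.presets as presets
print(hasattr(presets, "use_preset"))  # should print True with this import path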

@hijkzzz

hijkzzz commented Nov 17, 2023

I ran into the problem AttributeError: 'dict' object has no attribute 'model_endpoint_type'.

@slaughters85j

slaughters85j commented Nov 20, 2023

I found success after mixing a few of the suggestions above.

  1. Per Matt's video, I created a conda environment using Python 3.11.3.
  2. I used the absolute path to the conda Python env (via 'which python') to install only pymemgpt and pyautogen.
  3. Modified the import of presets to:
import memgpt.presets.presets as presets
  4. Modified the code block in app.py to:
interface = autogen_interface.AutoGenInterface() # how MemGPT talks to AutoGen
persistence_manager = InMemoryStateManager()
persona = "I am an engineer at Att3st tech company."
human = "I am a team manager at Att3st tech company."
agent_config = llm_config
memgpt_agent = presets.use_preset(presets.DEFAULT_PRESET, model='gpt-4', persona=persona, human=human, interface=interface, persistence_manager=persistence_manager, agent_config=agent_config)
  5. Modified the MemGPTAgent file (memgpt_agent.py), starting at line 229, to:
    async def _generate_reply_for_user_message(
        self,
        messages: Optional[List[Dict]] = None,
        sender: Optional[Agent] = None,
        config: Optional[Any] = None,
    ) -> Tuple[bool, Union[str, Dict, None]]:
        self.agent.interface.reset_message_list()

        new_messages = self.find_new_messages(messages)
        if len(new_messages) > 1:
            if self.concat_other_agent_messages:
                # Combine all the other messages into one message
                user_message = "\n".join([self.format_other_agent_message(m) for m in new_messages])
            else:
                # Extend the MemGPT message list with multiple 'user' messages, then push the last one with agent.step()
                self.agent.messages.extend(new_messages[:-1])
                user_message = new_messages[-1]
        elif len(new_messages) == 1:
            user_message = new_messages[0]
        else:
            return True, self._default_auto_reply

        # Package the user message
        user_message = system.package_user_message(user_message)

        # Send a single message into MemGPT
        while True:
            (
                new_messages,
                heartbeat_request,
                function_failed,
                token_warning,
            ) = await self.agent.step(user_message, first_message=False, skip_verify=self.skip_verify)

It's now working.

@Geetha-muffin

Geetha-muffin commented Nov 24, 2023

@slaughters85j Could you please share the complete code along with the requirements file?

@Mugunthapranav

User_proxy (to chat_manager):

First send the message 'Let's go Mario!'


Traceback (most recent call last):
  File "C:\Users\Muguntha Pranav M\PycharmProjects\Auto_mem\main.py", line 51, in <module>
    user_proxy.initiate_chat(manager, message="First send the message 'Let's go Mario!'")
  File "C:\Users\Muguntha Pranav M\.conda\envs\Auto_mem\lib\site-packages\autogen\agentchat\conversable_agent.py", line 531, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "C:\Users\Muguntha Pranav M\.conda\envs\Auto_mem\lib\site-packages\autogen\agentchat\conversable_agent.py", line 334, in send
    recipient.receive(message, self, request_reply, silent)
  File "C:\Users\Muguntha Pranav M\.conda\envs\Auto_mem\lib\site-packages\autogen\agentchat\conversable_agent.py", line 462, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
  File "C:\Users\Muguntha Pranav M\.conda\envs\Auto_mem\lib\site-packages\autogen\agentchat\conversable_agent.py", line 781, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
  File "C:\Users\Muguntha Pranav M\.conda\envs\Auto_mem\lib\site-packages\autogen\agentchat\groupchat.py", line 164, in run_chat
    reply = speaker.generate_reply(sender=self)
  File "C:\Users\Muguntha Pranav M\.conda\envs\Auto_mem\lib\site-packages\autogen\agentchat\conversable_agent.py", line 781, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
  File "C:\Users\Muguntha Pranav M\.conda\envs\Auto_mem\lib\site-packages\memgpt\autogen\memgpt_agent.py", line 267, in _generate_reply_for_user_message
    ) = self.agent.step(user_message, first_message=False, skip_verify=self.skip_verify)
  File "C:\Users\Muguntha Pranav M\.conda\envs\Auto_mem\lib\site-packages\memgpt\agent.py", line 655, in step
    raise e
  File "C:\Users\Muguntha Pranav M\.conda\envs\Auto_mem\lib\site-packages\memgpt\agent.py", line 575, in step
    response = self.get_ai_reply(
  File "C:\Users\Muguntha Pranav M\.conda\envs\Auto_mem\lib\site-packages\memgpt\agent.py", line 780, in get_ai_reply
    raise e
  File "C:\Users\Muguntha Pranav M\.conda\envs\Auto_mem\lib\site-packages\memgpt\agent.py", line 763, in get_ai_reply
    context_window=self.context_window,
AttributeError: 'Agent' object has no attribute 'context_window'

How do I solve this error?

@salmercker

OK, I'm on Windows, and no matter how I try to install this (pymemgpt, pyautogen, or memgpt/autogen), it doesn't get rid of the lines saying they aren't installed.

@eldados

eldados commented Dec 2, 2023

OK, I'm on Windows, and no matter how I try to install this (pymemgpt, pyautogen, or memgpt/autogen), it doesn't get rid of the lines saying they aren't installed.

Same here.
On a Mac, in a Conda environment: installed everything, but I still get a squiggly line under:
import memgpt.personas.personas as personas
import memgpt.humans.humans as humans

@owgit

owgit commented Dec 3, 2023


I got the same error.

@rlam3

rlam3 commented Dec 4, 2023

@mberman84 Hey, do you have a new gist that you could share, or a repo with a fix?

@marfal

marfal commented Dec 6, 2023

Also problems with:
import memgpt.personas.personas as personas
import memgpt.humans.humans as humans

@imfurkaann

ModuleNotFoundError: No module named 'memgpt.personas.personas'

How do I solve it?

@jakeparkernet

jakeparkernet commented Dec 10, 2023

Also problems with: import memgpt.personas.personas as personas import memgpt.humans.humans as humans

Also had this, fixed it. If you slow down the video to 0.25x speed, you can see he installs MemGPT 0.1.3 and AutoGen 0.1.13 at the 2:24 timestamp.
I'm running in a virtual environment on Windows. My commands:

pip uninstall pyautogen
pip uninstall pymemgpt
pip install pyautogen==0.1.13
pip install pymemgpt==0.1.3

Successfully ran original gist.
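
In case it helps anyone pin this down, a quick way to confirm which versions actually ended up in the environment (standard library only; package names assumed to be pyautogen and pymemgpt, as above):

from importlib.metadata import version

print("pyautogen:", version("pyautogen"))  # expecting 0.1.13 per the video
print("pymemgpt:", version("pymemgpt"))    # expecting 0.1.3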

These things are changing and evolving so quickly, it might be a good idea to mention or post versions in future tutorials. Grateful for all the videos just the same.

EDIT: When I went back to play the video, it was still at 0.25x speed; highly recommend.

@twerpyfie


@jakeparkernet OMG, this works, thank you!!
