@ice09 · Created February 9, 2025

Many Agent AI - a central, strong AI that builds many lightweight agents to solve your problem

Many Agent AI Setup Guide

This guide walks you through setting up and running the Many Agent AI Python script. The application uses the OpenAI API and Gradio to provide a web-based interface for interacting with multiple AI agents.

Prerequisites

Before you begin, make sure you have the following:

  1. Python 3.8 or higher
  2. pip (the Python package manager)
  3. An OpenAI API key

Step 1: Clone the Repository or Create the Script

If you haven't already, save the provided Python script to a file named many_agent_ai.py or clone the repository if available.

Step 2: Set Up a Virtual Environment (Optional but Recommended)

Using a virtual environment helps manage dependencies and avoid conflicts.

python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`

Step 3: Install Required Packages

Run the following command to install the necessary dependencies:

pip install openai gradio
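
The script uses the v1-style client interface (from openai import OpenAI), so if your environment resolves an older openai package for some reason, you can pin the 1.x series explicitly (standard pip version-specifier syntax):

pip install "openai>=1.0" gradio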

Step 4: Configure Your OpenAI API Key

Replace the placeholder API key in the script with your actual OpenAI API key. Locate this line near the top of the script and swap in your key:

client = OpenAI(api_key="demo")

Important: Keep your API key secure and do not share it publicly.
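
If you prefer not to hardcode the key, the v1 openai SDK also reads the OPENAI_API_KEY environment variable; a minimal sketch of that variant (the env-var approach is standard SDK behavior, not part of the original script):

import os
from openai import OpenAI

# Read the key from the environment instead of embedding it in source.
# OpenAI() with no arguments also works, since the SDK checks OPENAI_API_KEY itself.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])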

Step 5: Run the Script

To launch the application, execute the script with the following command:

python many_agent_ai.py

The Gradio interface will launch, and you'll see a link in your terminal to open the app in your browser (usually at http://127.0.0.1:7860).
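
If you want the app reachable from other machines on your network, or need a temporary public link, Gradio's launch() accepts the standard server_name and share parameters; for example, you could change the last line of the script to:

demo.launch(server_name="0.0.0.0", share=True)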

Step 6: Using the Application

Agent Flow Tab

  1. Enter Your Message: Type a task or query you want the AI agents to handle.
  2. Run Agent Flow: Click the "Run Agent Flow" button.
  3. View Results:
    • Orchestrator Output: Displays the JSON configuration of agents.
    • Final Answer: The final response after all agents have processed the task.
    • Aggregated Intermediate Responses: Shows all intermediate steps and outputs from the agents.

Follow-Up Chat Tab

  1. Ask Follow-Up Questions: Enter a follow-up question related to the previous task.
  2. Send Follow-Up: Click the "Send Follow-Up" button.
  3. View Follow-Up Response: The AI will respond considering the context from the previous conversation.

Troubleshooting

  • Invalid API Key: Ensure your API key is correctly set in the script.

  • Dependency Issues: Reinstall packages with pip install --force-reinstall openai gradio.

  • Port Already in Use: The script does not parse command-line arguments, so if port 7860 is occupied, change the port in the launch() call at the end of the script:

    demo.launch(server_port=7861)

Customization

  • Modify Agent Behaviors: Adjust the system messages in the script to change how each agent handles tasks.
  • Change Models: You can swap between gpt-4o-mini and o3-mini for different agents depending on task complexity and cost.
  • Temperature Settings: Modify the temperature values to make the outputs more creative (higher values) or more deterministic (lower values); see the sketch below.
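
For example, the fallback temperature applied when the orchestrator's JSON omits one lives in AgentFlowEngine.agent_flow; nudging it up makes under-specified agents more creative (0.8 here is just an illustrative value, not the script's default):

# In AgentFlowEngine.agent_flow: default used when the orchestrator JSON has no temperature
temp = float(value.get("temperature", 0.8))  # the script's original default is 0.5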

You're now ready to explore and experiment with Many Agent AI. Enjoy the vibe coding experience!

The full script (many_agent_ai.py):

import json
from openai import OpenAI
import gradio as gr
from typing import Dict, Any
# --------------------------
# Set your OpenAI API key
# --------------------------
client = OpenAI(api_key="demo")
# ========================
# The updated agent classes
# ========================
# Configuration class for worker agents
class WorkerAgentConfig:
    def worker_model(self, model_name: str, temp: float):
        config = {
            'model_name': model_name,
            'timeout': 720  # seconds
        }
        # Only include temperature for models that support it
        if model_name != "o3-mini":
            config['temperature'] = temp
        return config
# Configuration class for the orchestrator agent
class OrchestratorAgentConfig:
    def orchestration_model(self, user_message: str):
        response = client.chat.completions.create(
            model="o3-mini",  # Use the appropriate orchestration model
            messages=[
                {
                    "role": "system",
                    "content": f"""
You are a powerful orchestration LLM which gets a user message request and instead of answering the request, you will split the task into different actions which execute different aspects of the question.
You will create the system message for these smaller agents. Your answer to a question should be ONLY the JSON representation of multiple ids (can be short names) and associated system messages for the agents. Only valid JSON, no other markup.
Each agent in the sequence will get the complete result of the agents before with information about the agent itself.
You can choose between "gpt-4o-mini" (cheap, for everyday tasks) and "o3-mini" (10x more expensive, for reasoning tasks), with "gpt-4o-mini" being the default. You also have to choose the temperature the agent should use (only for gpt-4o-mini, a double between 0 and 1).
There will be an external process which will sequentially execute your agents and provide the following agents with all information from the previous agents.
Make sure that the final agent merges all relevant information from the steps before, this answer is the final answer to the user.
For example:
user request: "I want to plan an hour lesson for primary school kids to explain, mostly analog, the functionality of LLMs".
Sample output:
{{
    "ResearchAgent": {{
        "system_message": "You are an agent tasked with researching age-appropriate analogies to explain the functionality of Large Language Models (LLMs) to primary school children. Provide several simple and relatable analogies that can help young students understand how LLMs work.",
        "model_name": "o3-mini"
    }},
    "StructureAgent": {{
        "system_message": "You are an agent responsible for designing the structure of a one-hour lesson aimed at explaining the functionality of LLMs to primary school kids. Using the analogies provided by the ResearchAgent, create a detailed lesson outline that includes an introduction, main explanation, interactive activities, and a conclusion.",
        "model_name": "gpt-4o-mini",
        "temperature": 0.4
    }},
    "MaterialsAgent": {{
        "system_message": "You are an agent in charge of creating teaching materials for the lesson plan developed by the StructureAgent. Develop slides, handouts, and any other necessary materials that effectively utilize the analogies and explanations to help primary school students understand LLMs.",
        "model_name": "gpt-4o-mini",
        "temperature": 0.3
    }},
    "ActivitiesAgent": {{
        "system_message": "You are an agent tasked with planning interactive activities to include in the lesson. Based on the lesson structure and materials from the previous agents, design engaging, analog-based activities that reinforce the concepts of how LLMs function.",
        "model_name": "gpt-4o-mini",
        "temperature": 0.4
    }},
    "Verifier": {{
        "system_message": "You are an agent responsible for reviewing and verifying the entire lesson plan for clarity, age-appropriateness, and effectiveness in explaining the functionality of LLMs to primary school children. Ensure that all components are coherent, all relevant details are mentioned and suggest improvements if necessary. Merge all relevant information from the steps before, your answer is the final answer to the user.",
        "model_name": "o3-mini"
    }}
}}
User Request: "I want to implement a Spring Boot application using gradle/kotlin. How to implement certain actions using GH actions?"
Sample Response:
{{
    "IdeaAgent-1": {{
        "system_message": "You are an agent tasked with generating ideas for implementing a Spring Boot application using Gradle and Kotlin, as well as integrating GitHub Actions for various actions. Provide suggestions for project setup, Gradle configurations, and GitHub Actions workflows.",
        "model_name": "o3-mini"
    }},
    "IdeaAgent-2": {{
        "system_message": "You are an agent tasked with generating ideas for implementing a Spring Boot application using Gradle and Kotlin, as well as integrating GitHub Actions for various actions. Provide suggestions for project setup, Gradle configurations, and GitHub Actions workflows.",
        "model_name": "gpt-4o-mini",
        "temperature": 0.8
    }},
    "IdeaAgent-3": {{
        "system_message": "You are an agent tasked with generating ideas for implementing a Spring Boot application using Gradle and Kotlin, as well as integrating GitHub Actions for various actions. Provide suggestions for project setup, Gradle configurations, and GitHub Actions workflows.",
        "model_name": "gpt-4o-mini",
        "temperature": 0.2
    }},
    "Verifier": {{
        "system_message": "You are an agent responsible for reviewing and verifying the ideas generated by the IdeaAgents. Ensure that the proposed ideas are coherent, feasible, and effectively address the implementation of the Spring Boot application and the GitHub Actions workflows. Merge all relevant information from the steps before, your answer is the final answer to the user.",
        "model_name": "o3-mini"
    }}
}}
This is the user request message: """ + user_message
                }
            ]
        )
        return response.choices[0].message.content
# Factory to create worker agents
class WorkerAgentFactory:
    def __init__(self, worker_agent_config: WorkerAgentConfig):
        self.worker_agent_config = worker_agent_config

    def build(self, system_message: str, model_name: str, temp: float):
        config = self.worker_agent_config.worker_model(model_name, temp)
        return WorkerAgent(config, system_message)
# The worker agent that communicates with the API
class WorkerAgent:
    def __init__(self, config: Dict[str, Any], system_message: str):
        self.config = config
        self.system_message = system_message

    def chat(self, context: str):
        print(f"WorkerAgent ({self.config['model_name']}) received context:\n{context}\n")
        # For o3-mini, do not pass the temperature parameter.
        if self.config['model_name'] == "o3-mini":
            response = client.chat.completions.create(
                model=self.config['model_name'],
                messages=[
                    {"role": "system", "content": self.system_message},
                    {"role": "user", "content": context}
                ]
            )
        else:
            response = client.chat.completions.create(
                model=self.config['model_name'],
                messages=[
                    {"role": "system", "content": self.system_message},
                    {"role": "user", "content": context}
                ],
                temperature=self.config['temperature']
            )
        agent_response = response.choices[0].message.content
        print(f"WorkerAgent ({self.config['model_name']}) response:\n{agent_response}\n")
        return agent_response
# The orchestrator agent that creates the JSON of agents
class OrchestratorAgent:
    def __init__(self, config: OrchestratorAgentConfig):
        self.config = config

    def chat(self, user_message: str):
        return self.config.orchestration_model(user_message)
# Engine to execute the agent flow
class AgentFlowEngine:
    def __init__(self, orchestrator_agent: OrchestratorAgent, worker_agent_factory: WorkerAgentFactory):
        self.orchestrator_agent = orchestrator_agent
        self.worker_agent_factory = worker_agent_factory

    def agent_flow(self, user_message: str):
        orchestrator_output = ""
        aggregated_responses = ""
        final_answer = ""
        # This string is passed as the context to each worker agent call.
        answers = f"USERMESSAGE: {user_message}\n###\n"
        try:
            # Orchestrator agent creates the JSON configuration for the worker agents.
            result_json = self.orchestrator_agent.chat(user_message)
            orchestrator_output = result_json  # Save raw JSON output
            print("Orchestrator Output (JSON):")
            print(result_json)
            print("-----------------------------------------------------")
            result_map = json.loads(result_json)
            agent_responses = []
            # Process each agent defined in the JSON
            for key, value in result_map.items():
                system_message = value.get("system_message")
                model_name = value.get("model_name")
                # Only convert temperature if the model is not "o3-mini"
                if model_name != "o3-mini":
                    temp = float(value.get("temperature", 0.5))  # use a default if missing
                else:
                    temp = None
                print(f"Executing Agent: {key}")
                print(f"Model: {model_name} | Temperature: {temp}")
                print("System Message:")
                print(system_message)
                print("-----------------------------------------------------")
                response = self.worker_agent_factory.build(system_message, model_name, temp).chat(answers)
                answer_text = f"\n###\nANSWER {key}\n###\n{response}"
                print(f"Response from Agent {key}:")
                print(response)
                print("-----------------------------------------------------")
                answers += answer_text
                agent_responses.append((key, response))
            aggregated_responses = answers
            if agent_responses:
                final_answer = agent_responses[-1][1]  # The last agent's answer is the final answer.
            print("Aggregated Intermediate Responses:")
            print(aggregated_responses)
            print("-----------------------------------------------------")
            print("Final Answer:")
            print(final_answer)
            print("=====================================================")
        except Exception as e:
            orchestrator_output = f"Error: {str(e)}"
            print("Error during agent flow:", str(e))
        return orchestrator_output, aggregated_responses, final_answer
# ==================================
# Global initialization of agents
# ==================================
worker_agent_config = WorkerAgentConfig()
orchestrator_agent_config = OrchestratorAgentConfig()
orchestrator_agent = OrchestratorAgent(orchestrator_agent_config)
worker_agent_factory = WorkerAgentFactory(worker_agent_config)
agent_flow_engine = AgentFlowEngine(orchestrator_agent, worker_agent_factory)
# =============================
# Helper functions for Gradio
# =============================
def run_agent_flow(user_message: str):
    """
    Executes the multi-agent flow.
    Returns:
    - Raw orchestrator output (JSON)
    - Aggregated intermediate responses from worker agents
    - Final answer (from the last agent)
    - A conversation context string (used for follow-up questions)
    """
    orch_out, agg_resp, final_ans = agent_flow_engine.agent_flow(user_message)
    conversation_context = agg_resp  # Use aggregated conversation as context for follow-up
    return orch_out, agg_resp, final_ans, conversation_context
def handle_followup(followup: str, context: str):
    """
    Handles a follow-up question by sending the previous conversation context along with the follow-up
    to a single dialogue model.
    """
    new_context = context + "\nUser follow-up: " + followup
    print("Follow-Up - New conversation context:")
    print(new_context)
    response = client.chat.completions.create(
        model="o3-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": new_context}
        ]
    )
    answer = response.choices[0].message.content
    print("Follow-Up - Assistant response:")
    print(answer)
    updated_context = new_context + "\nAssistant: " + answer
    print("Updated conversation context:")
    print(updated_context)
    print("=====================================================")
    return answer, updated_context
# ============================
# Gradio App Interface
# ============================
with gr.Blocks() as demo:
    gr.Markdown("# Multi-Agent Orchestration Gradio App")
    with gr.Tab("Agent Flow"):
        with gr.Row():
            user_input = gr.Textbox(label="User Message",
                                    placeholder="Enter your message here...",
                                    lines=3)
            run_button = gr.Button("Run Agent Flow")
        with gr.Row():
            orch_output_box = gr.Textbox(label="Orchestrator Output (JSON)", lines=5)
        with gr.Row():
            final_answer_box = gr.Textbox(label="Final Answer (Last Agent)", lines=5)
        with gr.Row():
            intermediate_box = gr.Textbox(label="Aggregated Intermediate Responses", lines=15)
        # Hidden state to store conversation context for follow-up questions.
        conversation_context_state = gr.State("")

        def run_flow(user_msg):
            orch_out, agg_resp, final_ans, conv_context = run_agent_flow(user_msg)
            return orch_out, agg_resp, final_ans, conv_context

        run_button.click(fn=run_flow,
                         inputs=user_input,
                         outputs=[orch_output_box, intermediate_box, final_answer_box, conversation_context_state])
    with gr.Tab("Follow-Up Chat"):
        gr.Markdown("### Continue the conversation with follow-up questions")
        followup_input = gr.Textbox(label="Follow-Up Question",
                                    placeholder="Enter your follow-up question...",
                                    lines=2)
        followup_button = gr.Button("Send Follow-Up")
        followup_output = gr.Textbox(label="Follow-Up Response", lines=5)

        def send_followup(followup, context):
            if not context:
                context = ""
            answer, updated_context = handle_followup(followup, context)
            return answer, updated_context

        followup_button.click(fn=send_followup,
                              inputs=[followup_input, conversation_context_state],
                              outputs=[followup_output, conversation_context_state])

demo.launch()