Autonomous Swarm Intelligence


Autonomous swarm intelligence combines the principles of swarm intelligence with autonomous systems to create self-organized, adaptive multi-agent systems capable of solving complex problems. This overview covers the concept, its key characteristics, working principles, applications, and potential future developments, followed by a sample implementation.

Introduction to Autonomous Swarm Intelligence

Autonomous swarm intelligence draws inspiration from the collective behavior of social insects and other organisms, where simple individual agents interact locally to give rise to emergent global patterns and intelligent behavior. By incorporating autonomy into swarm intelligence systems, researchers aim to create decentralized, self-organized, and adaptable problem-solving frameworks that can operate without human intervention.

Key Characteristics of Autonomous Swarm Intelligence

  1. Decentralization: Autonomous swarm intelligence systems operate without a central control authority. Each agent makes decisions based on local information and interactions with other agents and the environment (a minimal simulation illustrating this appears after this list).

  2. Self-Organization: The global behavior and patterns of the swarm emerge from the local interactions and rules followed by individual agents. This self-organization allows the system to adapt and respond to changes in the environment.

  3. Autonomy: Agents in an autonomous swarm intelligence system are capable of making independent decisions and taking actions based on their own perceptions and internal states. They do not rely on external control or guidance.

  4. Scalability: Autonomous swarm intelligence systems can scale effectively, as the addition or removal of agents does not significantly impact the overall performance of the swarm. This scalability enables the system to handle large-scale problems and adapt to changing requirements.

  5. Robustness: The decentralized nature of autonomous swarm intelligence systems makes them robust against individual agent failures. The swarm can continue to function and achieve its objectives even if some agents malfunction or are removed from the system.
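
To make these properties concrete, the following is a minimal, hypothetical simulation (not taken from any particular library): each agent updates its position using only the positions of nearby neighbors, there is no central controller, and isolated agents do not break the loop.

import random

# Minimal decentralized swarm sketch: every agent steers toward the average
# position of its local neighbors only. No agent sees the whole swarm and
# there is no central controller, yet the group gradually clusters together.
NUM_AGENTS, NEIGHBOR_RADIUS, STEPS = 20, 2.0, 50

agents = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(NUM_AGENTS)]

def neighbors(i):
    xi, yi = agents[i]
    return [a for j, a in enumerate(agents)
            if j != i and (a[0] - xi) ** 2 + (a[1] - yi) ** 2 < NEIGHBOR_RADIUS ** 2]

for _ in range(STEPS):
    for i, (x, y) in enumerate(agents):
        local = neighbors(i)
        if not local:
            continue  # an isolated agent simply holds its position
        cx = sum(a[0] for a in local) / len(local)
        cy = sum(a[1] for a in local) / len(local)
        agents[i] = [x + 0.1 * (cx - x), y + 0.1 * (cy - y)]

print(agents[:3])  # sample of final positions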

Working Principles of Autonomous Swarm Intelligence

Autonomous swarm intelligence systems operate based on a set of fundamental principles that govern the behavior and interactions of individual agents:

  1. Local Interactions: Agents in the swarm communicate and interact with their immediate neighbors, exchanging information and influencing each other's behavior. These local interactions give rise to the emergent global behavior of the swarm.

  2. Stigmergy: Agents in the swarm communicate indirectly through modifications to the environment. For example, ants leave pheromone trails that guide other ants toward food sources. This indirect communication allows agents to coordinate their actions without exchanging messages directly (a small pheromone-trail sketch follows this list).

  3. Feedback Mechanisms: Positive and negative feedback loops play a crucial role in the self-organization of autonomous swarm intelligence systems. Positive feedback amplifies successful behaviors, while negative feedback helps to stabilize the system and prevent runaway processes.

  4. Adaptation and Learning: Autonomous swarm intelligence systems can adapt and learn from their experiences. Agents can modify their behavior based on the outcomes of their actions, allowing the swarm to optimize its performance over time.
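
The stigmergy and feedback principles above can be illustrated with a small, hypothetical two-path pheromone model (parameter values chosen only for illustration): pheromone deposits act as positive feedback that reinforces the shorter path, while evaporation acts as negative feedback that keeps the values bounded.

import random

# Hypothetical stigmergy sketch: agents choose between a short and a long path,
# biased by current pheromone levels. Deposits reinforce good choices (positive
# feedback); evaporation prevents runaway growth (negative feedback).
pheromone = {"short": 1.0, "long": 1.0}
path_length = {"short": 1.0, "long": 2.0}
EVAPORATION = 0.1

for _ in range(100):
    total = pheromone["short"] + pheromone["long"]
    for _ in range(10):  # ten agents traverse per iteration
        path = "short" if random.random() < pheromone["short"] / total else "long"
        pheromone[path] += 1.0 / path_length[path]  # deposit: shorter path gets more
    for path in pheromone:
        pheromone[path] *= 1.0 - EVAPORATION        # evaporation

print(pheromone)  # the short path accumulates most of the pheromone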

Applications of Autonomous Swarm Intelligence

Autonomous swarm intelligence has found applications in various domains, leveraging its ability to solve complex problems through decentralized and self-organized approaches:

  1. Optimization: Swarm intelligence algorithms, such as Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), have been successfully applied to optimization problems in fields like logistics, transportation, and resource allocation (a compact PSO example follows this list).

  2. Robotics: Swarm robotics involves the coordination and collaboration of multiple autonomous robots to achieve collective goals. Autonomous swarm intelligence principles have been employed in tasks such as exploration, search and rescue, and collective transportation.

  3. Networking: Autonomous swarm intelligence has been applied to optimize network routing, load balancing, and resource allocation in communication networks. Swarm-based algorithms can adapt to changing network conditions and provide robust and efficient solutions.

  4. Artificial Intelligence: Swarm intelligence principles have been integrated into AI systems to develop decentralized decision-making frameworks, multi-agent systems, and collective intelligence algorithms. These approaches can enhance the adaptability, scalability, and robustness of AI systems.
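
As an illustration of the optimization use case, here is a compact, self-contained Particle Swarm Optimization sketch that minimizes the simple test function f(x) = sum of squares of x, with inertia and learning coefficients chosen purely for demonstration.

import random

# Compact PSO: particles track their personal best and the global best, and
# their velocities blend inertia, a cognitive pull (toward the personal best),
# and a social pull (toward the global best).
def f(x):
    return sum(xi * xi for xi in x)

DIM, SWARM_SIZE, ITERATIONS = 2, 20, 100
W, C1, C2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social coefficients

positions = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM_SIZE)]
velocities = [[0.0] * DIM for _ in range(SWARM_SIZE)]
personal_best = [p[:] for p in positions]
global_best = min(personal_best, key=f)

for _ in range(ITERATIONS):
    for i in range(SWARM_SIZE):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (W * velocities[i][d]
                                + C1 * r1 * (personal_best[i][d] - positions[i][d])
                                + C2 * r2 * (global_best[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
        if f(positions[i]) < f(personal_best[i]):
            personal_best[i] = positions[i][:]
    global_best = min(personal_best, key=f)

print("best solution:", global_best, "value:", f(global_best))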

Future Developments and Challenges

As research in autonomous swarm intelligence progresses, several exciting developments and challenges lie ahead:

  1. Heterogeneous Swarms: Investigating the coordination and collaboration of heterogeneous agents with diverse capabilities and roles within a swarm can lead to more versatile and adaptable systems.

  2. Human-Swarm Interaction: Developing intuitive interfaces and control mechanisms for humans to interact with and guide autonomous swarm intelligence systems is crucial for their effective deployment in real-world scenarios.

  3. Swarm Learning: Incorporating machine learning techniques into autonomous swarm intelligence systems can enable swarms to learn and adapt their behavior based on past experiences and environmental feedback.

  4. Scalability and Robustness: Ensuring the scalability and robustness of autonomous swarm intelligence systems in large-scale, dynamic, and uncertain environments remains a significant challenge. Developing efficient communication protocols, fault-tolerant mechanisms, and adaptive strategies is essential.

  5. Ethical Considerations: As autonomous swarm intelligence systems become more prevalent, addressing ethical concerns related to their decision-making, accountability, and potential impact on society becomes increasingly important.

Implementation

Here is a detailed walkthrough of how to combine Microsoft's Phi-3 LLM with the E2B code interpreter Python library to deploy an autonomous swarm intelligence system:

Overview

Phi-3 is a small language model (SLM) recently released by Microsoft. It is designed to be lightweight and efficient while still providing strong performance on various NLP tasks. Phi-3-mini, the smallest variant, has 3.8 billion parameters and can run comfortably on devices like smartphones.[10][17]
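
As a quick sanity check that Phi-3 runs locally, here is a minimal sketch using the Hugging Face transformers library with the publicly released microsoft/Phi-3-mini-4k-instruct checkpoint (this snippet is illustrative and independent of the swarm code below).

# Minimal local Phi-3-mini inference via Hugging Face transformers.
# Requires `pip install transformers accelerate`; depending on your
# transformers version, trust_remote_code=True may be needed.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = "Explain swarm intelligence in one sentence."
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])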

The E2B code interpreter is an open-source Python library that allows running untrusted AI-generated code in secure sandboxed environments called micro VMs.[2][3] It supports streaming output and can be used to build AI apps like code execution, data analysis, tutors, and reasoning modules.
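
Before wiring everything together, it can help to verify E2B access with a minimal sketch that reuses the same CodeInterpreter and notebook.exec_cell calls that appear in the model code later in this document (assumes E2B_API_KEY is set in your environment).

import os

# Minimal E2B sanity check: run one cell in a remote sandbox and print its result.
from e2b_code_interpreter import CodeInterpreter

with CodeInterpreter(api_key=os.environ["E2B_API_KEY"]) as sandbox:
    execution = sandbox.notebook.exec_cell("2 + 2")
    print(execution.text)  # text representation of the cell's result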

By combining Phi-3 with the E2B code interpreter, we can create an autonomous swarm intelligence system. This involves deploying multiple Phi-3 models in separate E2B sandboxes that communicate and coordinate with each other using WebSockets to perform tasks in a self-driven, asynchronous manner.

Architecture

The high-level architecture consists of:

  • A fleet of Phi-3 models, each running in its own E2B micro VM sandbox
  • A central coordinator that manages the swarm and assigns tasks
  • WebSocket connections between the models and coordinator for real-time communication[1][16]
  • Shared storage accessible by all models for persisting state and results

When deployed, the coordinator reads a purpose.md file that declaratively specifies the overall goal and capabilities of the swarm. It then dynamically generates prompts and allocates subtasks to the individual Phi-3 models. The models execute the tasks in their sandboxes, stream results back to the coordinator, and can also communicate laterally with each other as needed. This enables complex, multi-step workflows to be autonomously carried out by the swarm in pursuit of its defined purpose.
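
As an illustration, purpose.md might look something like the following (the exact format is whatever your parse_purpose implementation expects; this example matches the simple bullet-based parser sketched later in this document):

# Swarm Intelligence System Purpose
The purpose of this swarm is to monitor a set of public data sources,
summarize notable changes, and produce a daily digest.

## Capabilities
- Fetch and parse web pages
- Summarize text with the Phi-3 model
- Write results to shared storage

## Tasks
- Collect the latest entries from each configured source
- Summarize each entry in two sentences
- Aggregate the summaries into a single digest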

Setup

Prerequisites

  • Docker
  • Python 3.7+
  • An E2B API key

Installation

  1. Clone the repository:
git clone https://github.com/username/phi3-swarm-intelligence.git
cd phi3-swarm-intelligence
  2. Create a virtual environment and install dependencies:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
  3. Set your E2B API key as an environment variable:
export E2B_API_KEY=your_api_key_here

Project Structure

phi3-swarm-intelligence/
├── coordinator/
│   ├── Dockerfile
│   ├── app.py
│   └── requirements.txt
├── phi3-model/
│   ├── Dockerfile
│   ├── app.py
│   └── requirements.txt
├── shared/
│   └── purpose.md
├── install.sh
└── deploy.py
  • coordinator/: Contains the code for the central coordinator

    • Dockerfile: Defines the coordinator's Docker image
    • app.py: Main coordinator application logic
    • requirements.txt: Python dependencies for the coordinator
  • phi3-model/: Contains the code for an individual Phi-3 model

    • Dockerfile: Defines the Phi-3 model's Docker image
    • app.py: Main model application logic
    • requirements.txt: Python dependencies for the model
  • shared/: Directory mounted to all containers for shared access

    • purpose.md: Markdown file specifying the swarm's purpose and capabilities
  • install.sh: Bash script to set up the project

  • deploy.py: Python script to deploy the swarm

Deployment

  1. Run the install script to set up the project:
./install.sh
  2. Deploy the swarm:
python deploy.py --num-models 5 --purpose shared/purpose.md

This will spin up 5 Phi-3 model instances and 1 coordinator, mounting the purpose.md file.

The deploy.py script looks like:

import argparse
import os

import docker

def deploy_swarm(num_models, purpose_file):
    client = docker.from_env()

    # Docker bind mounts require an absolute path on the host
    purpose_path = os.path.abspath(purpose_file)

    # Put every container on one user-defined network so the models can reach
    # the coordinator by its container name
    network_name = "phi3-swarm"
    try:
        client.networks.get(network_name)
    except docker.errors.NotFound:
        client.networks.create(network_name, driver="bridge")

    # Deploy coordinator
    coordinator = client.containers.run(
        "phi3-coordinator",
        name="coordinator",
        network=network_name,
        volumes={purpose_path: {"bind": "/app/purpose.md", "mode": "ro"}},
        detach=True,
    )

    # Deploy models; each one is told where the coordinator listens and is
    # given the E2B API key from the host environment
    models = []
    for _ in range(num_models):
        model = client.containers.run(
            "phi3-model",
            network=network_name,
            environment={
                "COORDINATOR_URL": "ws://coordinator:8765",
                "E2B_API_KEY": os.environ["E2B_API_KEY"],
            },
            volumes={purpose_path: {"bind": "/app/purpose.md", "mode": "ro"}},
            detach=True,
        )
        models.append(model)

    print(f"Deployed {num_models} models and 1 coordinator.")
    print(f"Coordinator ID: {coordinator.id}")
    print(f"Model IDs: {[m.id for m in models]}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--num-models", type=int, required=True)
    parser.add_argument("--purpose", type=str, required=True)
    args = parser.parse_args()

    deploy_swarm(args.num_models, args.purpose)

The key aspects are:

  • It uses the Docker SDK to programmatically deploy containers
  • The purpose.md file is mounted into all containers (using an absolute host path, as Docker bind mounts require)
  • All containers share a user-defined Docker network so the models can reach the coordinator by name
  • Each model container receives the coordinator's WebSocket URL and the E2B API key via environment variables
  • The coordinator and models are started in detached mode

Phi-3 Model Implementation

The app.py for an individual Phi-3 model looks like:

import os
import asyncio
import websockets
from e2b_code_interpreter import CodeInterpreter

E2B_API_KEY = os.environ["E2B_API_KEY"]
COORDINATOR_URL = os.environ["COORDINATOR_URL"]

async def run_model():
    # Connect to coordinator
    async with websockets.connect(COORDINATOR_URL) as websocket:
        # Load purpose.md
        with open("purpose.md") as f:
            purpose = f.read()
        
        # Initialize code interpreter
        with CodeInterpreter(api_key=E2B_API_KEY) as sandbox:
            while True:
                # Receive task from coordinator
                task = await websocket.recv()
                print(f"Received task: {task}")

                # Generate code to accomplish the task (in this sketch the code
                # generation step is delegated to the sandbox helper; this is
                # where the Phi-3 model's output plugs in)
                prompt = f"""
                Here is the purpose of our swarm intelligence system:
                {purpose}

                Given that context, write Python code to accomplish this task:
                {task}
                """
                code = sandbox.notebook.generate_code(prompt)

                # Execute generated code
                execution = sandbox.notebook.exec_cell(code)
                print(f"Execution output: {execution.text}")

                # Send result back to coordinator 
                await websocket.send(execution.text)

if __name__ == "__main__":
    asyncio.run(run_model())

The key aspects are:

  • It connects to the coordinator using a WebSocket
  • It loads the shared purpose.md file on startup
  • Inside a loop, it receives tasks from the coordinator, generates code to accomplish them using the E2B code interpreter, executes that code, and streams the result back
  • Code generation is prompted using the swarm's overall purpose for context

Coordinator Implementation

The app.py for the coordinator looks like:

import os
import asyncio
import websockets

connected = set()

async def coordinator(websocket):
    connected.add(websocket)
    try:
        # Load purpose.md
        with open("purpose.md") as f:
            purpose = f.read()

        # Parse purpose into subtasks 
        subtasks = parse_purpose(purpose)

        # Assign subtasks to models
        for subtask in subtasks:
            await assign_task(subtask)

        # Collect one result per subtask (in this sketch, all results come
        # back over the connecting model's socket)
        results = []
        for _ in range(len(subtasks)):
            result = await websocket.recv()
            results.append(result)

        # Combine results and take action
        final_result = aggregate_results(results)
        take_action(final_result)

    finally:
        connected.remove(websocket)

async def assign_task(task):
    # Pick an arbitrary connected model; a real coordinator would load-balance
    model = next(iter(connected))

    # Send task to model
    await model.send(task)

def parse_purpose(purpose):
    # Placeholder for parsing logic
    return ["subtask1", "subtask2", "subtask3"] 

def aggregate_results(results):
    # Placeholder for aggregation logic
    return "final_result"

def take_action(result):
    # Placeholder for action logic
    print(f"Taking action based on result: {result}")

async def main():
    # Bind to all interfaces so model containers on the Docker network can connect
    async with websockets.serve(coordinator, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())

The key aspects are:

  • It listens for incoming WebSocket connections from models
  • When a model connects, it loads purpose.md, parses it into subtasks, and assigns those subtasks to available models
  • It collects the results from each model, aggregates them, and takes some final action
  • The parse_purpose, aggregate_results and take_action functions are placeholders for the actual logic which would be swarm-specific
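
For example, a minimal parse_purpose could treat every bullet under a "## Tasks" heading in purpose.md as one subtask (this matches the illustrative purpose.md shown earlier; real parsing logic would be swarm-specific):

def parse_purpose(purpose):
    # Collect every "- " bullet that appears under a "## Tasks" heading
    subtasks = []
    in_tasks = False
    for line in purpose.splitlines():
        stripped = line.strip()
        if stripped.lower().startswith("## tasks"):
            in_tasks = True
        elif stripped.startswith("## "):
            in_tasks = False
        elif in_tasks and stripped.startswith("- "):
            subtasks.append(stripped[2:])
    return subtasks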

Putting it All Together

With the coordinator and models implemented, the install.sh script sets up the shared purpose.md file and builds the Docker images:

#!/bin/bash

# Create directories
mkdir -p coordinator phi3-model shared

# Create purpose.md
echo "# Swarm Intelligence System Purpose" > shared/purpose.md
echo "The purpose of this swarm intelligence system is to..." >> shared/purpose.md

# Build Docker images
docker build -t phi3-coordinator coordinator/
docker build -t phi3-model phi3-model/

When deploy.py is run, it will:

  1. Spin up the specified number of Phi-3 model containers (each of which opens its own E2B sandbox)
  2. Spin up the coordinator
  3. Mount the purpose.md file into all containers
  4. Print the IDs of the deployed containers

The models will connect to the coordinator, receive subtasks, generate and execute code to accomplish those subtasks using the E2B interpreter, and stream the results back.

The coordinator will collect the results, aggregate them, and take some final action based on the swarm's defined purpose.

This architecture enables a fleet of Phi-3 models to work together as an autonomous swarm intelligence system to accomplish complex tasks in a self-driven, asynchronous manner.

The specific logic for parsing the purpose into subtasks, aggregating results, and taking actions would depend on the use case. But the general framework of coordinator, models, E2B sandboxes, and WebSocket communication provides a powerful foundation for deploying this type of AI system.


Conclusion

Autonomous swarm intelligence represents a promising approach to solving complex problems by leveraging the collective intelligence and self-organization of decentralized multi-agent systems. By combining the principles of swarm intelligence with autonomy, researchers aim to create adaptable, scalable, and robust problem-solving frameworks that can operate without human intervention.

As research in this field progresses, autonomous swarm intelligence has the potential to revolutionize various domains, from optimization and robotics to networking and artificial intelligence. However, challenges related to heterogeneity, human-swarm interaction, scalability, and ethical considerations must be addressed to fully realize the potential of autonomous swarm intelligence systems.

By continuing to explore the principles and applications of autonomous swarm intelligence, researchers and practitioners can unlock new possibilities for solving complex problems and developing intelligent systems that can adapt and thrive in dynamic environments.
