
πŸš€ MassGen: An Open-source Multi-Agent Scaling System Inspired by Grok Heavy and Gemini Deep Think. Join the discord channel: https://discord.com/invite/VVrT2rQaz5


MassGen Logo

Python 3.11+ License Join our Discord

πŸš€ MassGen: Multi-Agent Scaling System for GenAI


MassGen case study -- Berkeley Agentic AI Summit Question

Multi-agent scaling through intelligent collaboration in Grok Heavy style

MassGen is a cutting-edge multi-agent system that leverages the power of collaborative AI to solve complex tasks. It assigns a task to multiple AI agents that work in parallel, observe each other's progress, and refine their approaches, converging on the best solution to deliver a comprehensive, high-quality result. The power of this "parallel study group" approach is exemplified by advanced systems like xAI's Grok Heavy and Google DeepMind's Gemini Deep Think.

This project started with the "threads of thought" and "iterative refinement" ideas presented in The Myth of Reasoning, and extends the classic "multi-agent conversation" idea in AG2. Here is a video recording of the background context introduction presented at the Berkeley Agentic AI Summit 2025.


πŸ“‹ Table of Contents

✨ Key Features

πŸ†• Latest Features

πŸ—οΈ System Design

πŸš€ Quick Start

πŸ’‘ Case Studies & Examples

πŸ—ΊοΈ Roadmap

πŸ“š Additional Resources


✨ Key Features

  • 🤝 Cross-Model/Agent Synergy: Harness strengths from diverse frontier model-powered agents
  • ⚡ Parallel Processing: Multiple agents tackle problems simultaneously
  • 👥 Intelligence Sharing: Agents share and learn from each other's work
  • 🔄 Consensus Building: Natural convergence through collaborative refinement
  • 📊 Live Visualization: See agents' working processes in real-time

πŸ†• Latest Features (v0.0.32)

What's New in v0.0.32:

  • Docker Execution Mode - Run agent commands in isolated Docker containers for enhanced security
  • MCP Architecture Refactoring - Simplified and streamlined MCP integration with cleaner codebase
  • Claude Code Docker Integration - Automatic Bash tool management and seamless Docker support

Try v0.0.32 Features:

# Docker isolated execution - secure command execution in containers
uv run python -m massgen.cli \
  --config massgen/configs/tools/code-execution/docker_simple.yaml \
  "Write a factorial function and test it"

# Multi-agent Docker deployment - each agent in isolated container
uv run python -m massgen.cli \
  --config massgen/configs/tools/code-execution/docker_multi_agent.yaml \
  "Build a Flask website about Bob Dylan"

# Claude Code with Docker - automatic tool management
uv run python -m massgen.cli \
  --config massgen/configs/tools/code-execution/docker_claude_code.yaml \
  "Build a Flask website about Bob Dylan"

# Resource-limited Docker execution - production-ready setup
uv run python -m massgen.cli \
  --config massgen/configs/tools/code-execution/docker_with_resource_limits.yaml \
  "Fetch data from an API and analyze it"

β†’ See all release examples


πŸ—οΈ System Design

MassGen operates through an architecture designed for seamless multi-agent collaboration:

graph TB
    O[πŸš€ MassGen Orchestrator<br/>πŸ“‹ Task Distribution & Coordination]

    subgraph Collaborative Agents
        A1[Agent 1<br/>πŸ—οΈ Anthropic/Claude + Tools]
        A2[Agent 2<br/>🌟 Google/Gemini + Tools]
        A3[Agent 3<br/>πŸ€– OpenAI/GPT + Tools]
        A4[Agent 4<br/>⚑ xAI/Grok + Tools]
    end

    H[πŸ”„ Shared Collaboration Hub<br/>πŸ“‘ Real-time Notification & Consensus]

    O --> A1 & A2 & A3 & A4
    A1 & A2 & A3 & A4 <--> H

    classDef orchestrator fill:#e1f5fe,stroke:#0288d1,stroke-width:3px
    classDef agent fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    classDef hub fill:#e8f5e8,stroke:#388e3c,stroke-width:2px

    class O orchestrator
    class A1,A2,A3,A4 agent
    class H hub

The system's workflow is defined by the following key principles:

Parallel Processing - Multiple agents tackle the same task simultaneously, each leveraging their unique capabilities (different models, tools, and specialized approaches).

Real-time Collaboration - Agents continuously share their working summaries and insights through a notification system, allowing them to learn from each other's approaches and build upon collective knowledge.

Convergence Detection - The system intelligently monitors when agents have reached stability in their solutions and achieved consensus through natural collaboration rather than forced agreement.

Adaptive Coordination - Agents can restart and refine their work when they receive new insights from others, creating a dynamic and responsive problem-solving environment.

This collaborative approach ensures that the final output leverages collective intelligence from multiple AI systems, leading to more robust and well-rounded results than any single agent could achieve alone.


πŸš€ Quick Start

1. πŸ“₯ Installation

Core Installation (requires Python 3.11+):

git clone https://github.com/Leezekun/MassGen.git
cd MassGen

pip install uv
uv venv

# Optional: Install AG2 framework integration (only needed for AG2 configs)
# uv pip install -e ".[external]"

Global Installation using uv tool (Recommended for multi-directory usage):

Install MassGen using uv tool for isolated, global access:

# Clone the repository
git clone https://github.com/Leezekun/MassGen.git
cd MassGen

# Install MassGen as a global tool in editable mode
uv tool install -e .

# Optional: Install AG2 framework integration (only needed for AG2 configs)
# uv pip install -e ".[external]"

# Now run from any directory
cd ~/projects/website
uv tool run massgen --config tools/filesystem/gemini_gpt5_filesystem_multiturn.yaml

cd ~/documents/research
uv tool run massgen --config tools/filesystem/gemini_gpt5_filesystem_multiturn.yaml

Benefits of uv tool installation:

  • βœ… Isolated Python environment (no conflicts with system Python)
  • βœ… Available globally from any directory
  • βœ… Editable mode (-e .) allows live development
  • βœ… Easy updates with git pull (editable mode)
  • βœ… Clean uninstall with uv tool uninstall massgen
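
For example, updating and removing a global tool install look like this (assuming the repository was cloned to ./MassGen):

# Pull the latest changes; the editable install picks them up automatically
cd MassGen && git pull

# Remove the global tool when it is no longer needed
uv tool uninstall massgen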

Optional Dependencies:

# AG2 Framework Integration (for external agent frameworks)
uv pip install -e ".[external]"

Optional CLI Tools (for enhanced capabilities):

# Claude Code CLI - Advanced coding assistant
npm install -g @anthropic-ai/claude-code

# LM Studio - Local model inference
# For MacOS/Linux
sudo ~/.lmstudio/bin/lms bootstrap
# For Windows
cmd /c %USERPROFILE%/.lmstudio/bin/lms.exe bootstrap

2. πŸ” API Configuration

Use the template file .env.example to create a .env file in the massgen directory with your API keys. Note that only the API keys for the models used by your MassGen agent team are needed.

# Copy example configuration
cp .env.example .env
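
A minimal .env might look like the sketch below. The exact variable names are listed in .env.example; the names and placeholder values here are illustrative, and you only need entries for the providers your agent team actually uses.

# Illustrative .env contents (check .env.example for the authoritative variable names)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
XAI_API_KEY=...
BRAVE_API_KEY=...   # Only needed for the Brave Search MCP examples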


3. 🧩 Supported Models and Tools

Models

The system currently supports multiple model providers with advanced capabilities:

API-based Models:

  • Azure OpenAI (NEW in v0.0.10): GPT-4, GPT-4o, GPT-3.5-turbo, GPT-4.1, GPT-5-chat
  • Cerebras AI: GPT-OSS-120B...
  • Claude: Claude Haiku 3.5, Claude Sonnet 4, Claude Opus 4...
  • Claude Code: Native Claude Code SDK with comprehensive dev tools
  • Gemini: Gemini 2.5 Flash, Gemini 2.5 Pro...
  • Grok: Grok-4, Grok-3, Grok-3-mini...
  • OpenAI: GPT-5 series (GPT-5, GPT-5-mini, GPT-5-nano)...
  • Together AI, Fireworks AI, Groq, Kimi/Moonshot, Nebius AI Studio, OpenRouter, POE: LLaMA, Mistral, Qwen...
  • Z AI: GLM-4.5

Local Model Support:

  • vLLM & SGLang (ENHANCED in v0.0.25): Unified inference backend supporting both vLLM and SGLang servers

    • Auto-detection between vLLM (port 8000) and SGLang (port 30000) servers
    • Support for both vLLM and SGLang-specific parameters (top_k, repetition_penalty, separate_reasoning)
    • Mixed server deployments with configuration example: two_qwen_vllm_sglang.yaml
  • LM Studio (v0.0.7+): Run open-weight models locally with automatic server management

    • Automatic LM Studio CLI installation
    • Auto-download and loading of models
    • Zero-cost usage reporting
    • Support for LLaMA, Mistral, Qwen and other open-weight models

Contributions adding more providers and local inference engines are welcome.

Tools

MassGen agents can leverage various tools to enhance their problem-solving capabilities. Both API-based and CLI-based backends support different tool capabilities.

Supported Built-in Tools by Backend:

  • Azure OpenAI (NEW in v0.0.10): Live Search ❌, Code Execution ❌, File Operations ❌, MCP ❌, Multimodal ❌. Advanced features: code interpreter, Azure deployment management
  • Claude API: Live Search ✅, Code Execution ✅, File Operations ✅, MCP ✅, Multimodal ✅. Advanced features: web search, code interpreter, MCP integration
  • Claude Code: Live Search ✅, Code Execution ✅, File Operations ✅, MCP ✅, Multimodal ✅ (image). Advanced features: native Claude Code SDK, comprehensive dev tools, MCP integration
  • Gemini API: Live Search ✅, Code Execution ✅, File Operations ✅, MCP ✅, Multimodal ✅ (image). Advanced features: web search, code execution, MCP integration
  • Grok API: Live Search ✅, Code Execution ❌, File Operations ✅, MCP ✅, Multimodal ❌. Advanced features: web search, MCP integration
  • OpenAI API: Live Search ✅, Code Execution ✅, File Operations ✅, MCP ✅, Multimodal ✅ (image). Advanced features: web search, code interpreter, MCP integration
  • ZAI API: Live Search ❌, Code Execution ❌, File Operations ✅, MCP ✅, Multimodal ❌. Advanced features: MCP integration

Note: Audio/video multimodal support (NEW in v0.0.30) is available through Chat Completions-based providers like OpenRouter and Qwen API. See configuration examples: single_openrouter_audio_understanding.yaml, single_qwen_video_understanding.yaml

4. πŸƒ Run MassGen

πŸš€ Getting Started

CLI Configuration Parameters

  • --config: Path to a YAML configuration file with agent definitions, model parameters, backend parameters, and UI settings
  • --backend: Backend type for quick setup without a config file (claude, claude_code, gemini, grok, openai, azure_openai, zai). Optional for models with default backends
  • --model: Model name for quick setup (e.g., gemini-2.5-flash, gpt-5-nano, ...). --config and --model are mutually exclusive; use one or the other
  • --system-message: System prompt for the agent in quick setup mode. Ignored when --config is provided
  • --no-display: Disable the real-time streaming coordination display (falls back to simple text output)
  • --no-logs: Disable real-time logging
  • --debug: Enable debug mode with verbose logging (NEW in v0.0.13). Shows detailed orchestrator activities, agent messages, backend operations, and tool calls. Debug logs are saved to agent_outputs/log_{time}/massgen_debug.log
  • "<your question>": Optional single-question input; if omitted, MassGen enters interactive chat mode

1. Single Agent (Easiest Start)

Quick Start Commands:

# Quick test with any supported model - no configuration needed
uv run python -m massgen.cli --model claude-3-5-sonnet-latest "What is machine learning?"
uv run python -m massgen.cli --model gemini-2.5-flash "Explain quantum computing"
uv run python -m massgen.cli --model gpt-5-nano "Summarize the latest AI developments"

Configuration:

Use the agent field to define a single agent with its backend and settings:

agent:
  id: "<agent_name>"
  backend:
    type: "azure_openai" | "chatcompletion" | "claude" | "claude_code" | "gemini" | "grok" | "openai" | "zai" | "lmstudio" #Type of backend
    model: "<model_name>" # Model name
    api_key: "<optional_key>"  # API key for backend. Uses env vars by default.
  system_message: "..."    # System Message for Single Agent

β†’ See all single agent configs

2. Multi-Agent Collaboration (Recommended)

Configuration:

Use the agents field to define multiple agents, each with its own backend and config, as shown in the YAML skeleton below.

Quick Start Commands:

# Three powerful agents working together - Gemini, GPT-5, and Grok
uv run python -m massgen.cli \
  --config massgen/configs/basic/multi/three_agents_default.yaml \
  "Analyze the pros and cons of renewable energy"

This showcases MassGen's core strength:

  • Gemini 2.5 Flash - Fast research with web search
  • GPT-5 Nano - Advanced reasoning with code execution
  • Grok-3 Mini - Real-time information and alternative perspectives

agents:  # Multiple agents (alternative to 'agent')
  - id: "<agent1 name>"
    backend:
      type: "azure_openai" | "chatcompletion" | "claude" | "claude_code" | "gemini" | "grok" | "openai" |  "zai" | "lmstudio" #Type of backend
      model: "<model_name>" # Model name
      api_key: "<optional_key>"  # API key for backend. Uses env vars by default.
    system_message: "..."    # System Message for Single Agent
  - id: "..."
    backend:
      type: "..."
      model: "..."
      ...
    system_message: "..."

β†’ Explore more multi-agent setups

3. Model Context Protocol (MCP)

The Model Context Protocol (MCP) standardizes how applications expose tools and context to language models. From the official documentation:

MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

MCP Configuration Parameters:

  • mcp_servers (dict, required for MCP): Container for MCP server definitions
    • type (string, required): Transport, either "stdio" or "streamable-http"
    • command (string, stdio only): Command to run the MCP server
    • args (list, stdio only): Arguments for the command
    • url (string, http only): Server endpoint URL
    • env (dict, optional): Environment variables to pass
  • allowed_tools (list, optional): Whitelist specific tools (if omitted, all tools are available)
  • exclude_tools (list, optional): Blacklist dangerous or unwanted tools

Quick Start Commands (Check backend MCP support here):

# Weather service with GPT-5
uv run python -m massgen.cli \
  --config massgen/configs/tools/mcp/gpt5_nano_mcp_example.yaml \
  "What's the weather forecast for New York this week?"

# Multi-tool MCP with Gemini - Search + Weather + Filesystem (Requires BRAVE_API_KEY in .env)
uv run python -m massgen.cli \
  --config massgen/configs/tools/mcp/multimcp_gemini.yaml \
  "Find the best restaurants in Paris and save the recommendations to a file"

Configuration:

agents:
  # Basic MCP configuration:
  - id: "weather-agent"             # Example agent id
    backend:
      type: "openai"                # Your backend choice
      model: "gpt-5-mini"           # Your model choice

      # Add MCP servers here
      mcp_servers:
        weather:                    # Server name (you choose this)
          type: "stdio"             # Communication type
          command: "npx"            # Command to run
          args: ["-y", "@modelcontextprotocol/server-weather"]  # MCP server package

    # That's it! The agent can now check weather.

  # Multiple MCP tools example:
  - id: "research-agent"            # Example agent id
    backend:
      type: "gemini"
      model: "gemini-2.5-flash"
      mcp_servers:
        # Web search
        search:
          type: "stdio"
          command: "npx"
          args: ["-y", "@modelcontextprotocol/server-brave-search"]
          env:
            BRAVE_API_KEY: "${BRAVE_API_KEY}"  # Set in .env file

        # HTTP-based MCP server (streamable-http transport)
        custom_api:
          type: "streamable-http"   # For HTTP/SSE servers
          url: "http://localhost:8080/mcp/sse"  # Server endpoint

    # Tool configuration (MCP tools are auto-discovered)
    allowed_tools:                  # Optional: whitelist specific tools
      - "mcp__weather__get_current_weather"
      - "mcp__test_server__mcp_echo"
      - "mcp__test_server__add_numbers"

    exclude_tools:                  # Optional: blacklist specific tools
      - "mcp__test_server__current_time"

β†’ View more MCP examples

4. File System Operations & Workspace Management

MassGen provides comprehensive file system support through multiple backends, enabling agents to read, write, and manipulate files in organized workspaces.

Filesystem Configuration Parameters:

  • cwd (string, required for file operations): Working directory for file operations (agent-specific workspace)
  • snapshot_storage (string, required): Directory for workspace snapshots
  • agent_temporary_workspace (string, required): Parent directory for temporary workspaces

Quick Start Commands:

# File operations with Claude Code
uv run python -m massgen.cli \
  --config massgen/configs/tools/filesystem/claude_code_single.yaml \
  "Create a Python web scraper and save results to CSV"

# Multi-agent file collaboration
uv run python -m massgen.cli \
  --config massgen/configs/tools/filesystem/claude_code_context_sharing.yaml \
  "Generate a comprehensive project report with charts and analysis"

Configuration:

# Basic Workspace Setup:
agents:
  - id: "file-agent"
    backend:
      type: "claude_code"          # Backend with file support
      model: "claude-sonnet-4"     # Your model choice
      cwd: "workspace"             # Isolated workspace for file operations

# Multi-Agent Workspace Isolation:
agents:
  - id: "analyzer"
    backend:
      type: "claude_code"
      cwd: "workspace1"            # Agent-specific workspace

  - id: "reviewer"
    backend:
      type: "gemini"
      cwd: "workspace2"            # Separate workspace

orchestrator:
  snapshot_storage: "snapshots"              # Shared snapshots directory
  agent_temporary_workspace: "temp_workspaces" # Temporary workspace management

Available File Operations:

  • Claude Code: Built-in tools (Read, Write, Edit, MultiEdit, Bash, Grep, Glob, LS, TodoWrite)
  • Other Backends: Via MCP Filesystem Server

Workspace Management:

  • Isolated Workspaces: Each agent's cwd is fully isolated and writable
  • Snapshot Storage: Share workspace context between Claude Code agents
  • Temporary Workspaces: Agents can access previous coordination results

β†’ View more filesystem examples

⚠️ IMPORTANT SAFETY WARNING

MassGen agents can autonomously read, write, modify, and delete files within their permitted directories.

Before running MassGen with filesystem access:

  • Only grant access to directories you're comfortable with agents modifying
  • Use the permission system to restrict write access where needed
  • Consider testing in an isolated directory or virtual environment first
  • Back up important files before granting write access
  • Review the context_paths configuration carefully

The agents will execute file operations without additional confirmation once permissions are granted.

5. Project Integration & User Context Paths (NEW in v0.0.21)

Work directly with your existing projects! User Context Paths allow you to share specific directories with all agents while maintaining granular permission control. This enables secure multi-agent collaboration on your real codebases, documentation, and data.

MassGen automatically organizes all its working files under a .massgen/ directory in your project root, keeping your project clean and making it easy to exclude MassGen's temporary files from version control.

Project Integration Parameters:

  • context_paths (list, required for project integration): Shared directories for all agents
    • path (string, required): Absolute path to your project directory (must be a directory, not a file)
    • permission (string, required): Access level, "read" or "write"

⚠️ Important: Context paths must point to directories, not individual files. MassGen validates all paths during startup and will show clear error messages for missing paths or file paths.

Quick Start Commands:

# Multi-agent collaboration to improve the website in `massgen/configs/resources/v0.0.21-example`
uv run python -m massgen.cli --config massgen/configs/tools/filesystem/gpt5mini_cc_fs_context_path.yaml "Enhance the website with: 1) A dark/light theme toggle with smooth transitions, 2) An interactive feature that helps users engage with the blog content (your choice - could be search, filtering by topic, reading time estimates, social sharing, reactions, etc.), and 3) Visual polish with CSS animations or transitions that make the site feel more modern and responsive. Use vanilla JavaScript and be creative with the implementation details."

Configuration:

# Basic Project Integration:
agents:
  - id: "code-reviewer"
    backend:
      type: "claude_code"
      cwd: "workspace"             # Agent's isolated work area

orchestrator:
  context_paths:
    - path: "/home/user/my-project/src"
      permission: "read"           # Agents can analyze your code
    - path: "/home/user/my-project/docs"
      permission: "write"          # Final agent can update docs

# Advanced: Multi-Agent Project Collaboration
agents:
  - id: "analyzer"
    backend:
      type: "gemini"
      cwd: "analysis_workspace"

  - id: "implementer"
    backend:
      type: "claude_code"
      cwd: "implementation_workspace"

orchestrator:
  context_paths:
    - path: "/home/user/legacy-app/src"
      permission: "read"           # Read existing codebase
    - path: "/home/user/legacy-app/tests"
      permission: "write"          # Write new tests
    - path: "/home/user/modernized-app"
      permission: "write"          # Create modernized version

This showcases project integration:

  • Real Project Access - Work with your actual codebases, not copies
  • Secure Permissions - Granular control over what agents can read/modify
  • Multi-Agent Collaboration - Multiple agents safely work on the same project
  • Context Agents (during coordination): Always READ-only access to protect your files
  • Final Agent (final execution): Gets the configured permission (READ or write)

Use Cases:

  • Code Review: Agents analyze your source code and suggest improvements
  • Documentation: Agents read project docs to understand context and generate updates
  • Data Processing: Agents access shared datasets and generate analysis reports
  • Project Migration: Agents examine existing projects and create modernized versions

Clean Project Organization:

your-project/
β”œβ”€β”€ .massgen/                          # All MassGen state
β”‚   β”œβ”€β”€ sessions/                      # Multi-turn conversation history (if using interactively)
β”‚   β”‚   └── session_20240101_143022/
β”‚   β”‚       β”œβ”€β”€ turn_1/                # Results from turn 1
β”‚   β”‚       β”œβ”€β”€ turn_2/                # Results from turn 2
β”‚   β”‚       └── SESSION_SUMMARY.txt    # Human-readable summary
β”‚   β”œβ”€β”€ workspaces/                    # Agent working directories
β”‚   β”‚   β”œβ”€β”€ agent1/                    # Individual agent workspaces
β”‚   β”‚   └── agent2/
β”‚   β”œβ”€β”€ snapshots/                     # Workspace snapshots for coordination
β”‚   └── temp_workspaces/               # Previous turn results for context
β”œβ”€β”€ massgen/
└── ...

Benefits:

  • βœ… Clean Projects - All MassGen files contained in one directory
  • βœ… Easy Gitignore - Just add .massgen/ to .gitignore
  • βœ… Portable - Move or delete .massgen/ without affecting your project
  • βœ… Multi-Turn Sessions - Conversation history preserved across sessions
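
For example, from the project root (assuming it is a git repository) you can exclude MassGen state with a single line:

# Keep MassGen state out of version control
echo ".massgen/" >> .gitignore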

Configuration Auto-Organization:

orchestrator:
  # User specifies simple names - MassGen organizes under .massgen/
  snapshot_storage: "snapshots"         # β†’ .massgen/snapshots/
  session_storage: "sessions"           # β†’ .massgen/sessions/
  agent_temporary_workspace: "temp"     # β†’ .massgen/temp/

agents:
  - backend:
      cwd: "workspace1"                 # β†’ .massgen/workspaces/workspace1/

β†’ Learn more about project integration

Security Considerations:

  • Agent ID Safety: Avoid IDs built from "agent" plus an incremental digit (e.g., agent1, agent2), as this may expose agent identities during voting
  • File Access Control: Restrict file access using MCP server configurations when needed
  • Path Validation: All context paths are validated to ensure they exist and are directories (not files)
  • Directory-Only Context Paths: Context paths must point to directories, not individual files

Additional Examples by Provider

Claude (Recursive MCP Execution - v0.0.20+)

# Claude with advanced tool chaining
uv run python -m massgen.cli \
  --config massgen/configs/tools/mcp/claude_mcp_example.yaml \
  "Research and compare weather in Beijing and Shanghai"

OpenAI (GPT-5 Series with MCP - v0.0.17+)

# GPT-5 with weather and external tools
uv run python -m massgen.cli \
  --config massgen/configs/tools/mcp/gpt5_mini_mcp_example.yaml \
  "What's the weather of Tokyo"

Gemini (Multi-Server MCP - v0.0.15+)

# Gemini with multiple MCP services
uv run python -m massgen.cli \
  --config massgen/configs/tools/mcp/multimcp_gemini.yaml \
  "Find accommodations in Paris with neighborhood analysis"    # (requires BRAVE_API_KEY in .env)

Claude Code (Development Tools)

# Professional development environment with auto-configured workspace
uv run python -m massgen.cli \
  --backend claude_code \
  --model sonnet \
  "Create a Flask web app with authentication"

# Default workspace directories created automatically:
# - workspace1/              (working directory)
# - snapshots/              (workspace snapshots)
# - temp_workspaces/        (temporary agent workspaces)

Local Models (LM Studio - v0.0.7+)

# Run open-source models locally
uv run python -m massgen.cli \
  --config massgen/configs/providers/local/lmstudio.yaml \
  "Explain machine learning concepts"

β†’ Browse by provider | Browse by tools | Browse teams

Additional Use Case Examples

Question Answering & Research:

# Complex research with multiple perspectives
uv run python -m massgen.cli \
  --config massgen/configs/basic/multi/gemini_4o_claude.yaml \
  "What's best to do in Stockholm in October 2025"

# Specific research requirements
uv run python -m massgen.cli \
  --config massgen/configs/basic/multi/gemini_4o_claude.yaml \
  "Give me all the talks on agent frameworks in Berkeley Agentic AI Summit 2025"

Creative Writing:

# Story generation with multiple creative agents
uv run python -m massgen.cli \
  --config massgen/configs/basic/multi/gemini_4o_claude.yaml \
  "Write a short story about a robot who discovers music"

Development & Coding:

# Full-stack development with file operations
uv run python -m massgen.cli \
  --config  massgen/configs/tools/filesystem/claude_code_single.yaml \
  "Create a Flask web app with authentication"

Web Automation (still in testing):

# Browser automation with screenshots and reporting
# Prerequisites: npm install @playwright/mcp@latest (for Playwright MCP server)
uv run python -m massgen.cli \
  --config massgen/configs/tools/code-execution/multi_agent_playwright_automation.yaml \
  "Browse three issues in https://github.com/Leezekun/MassGen and suggest documentation improvements. Include screenshots and suggestions in a website."

# Data extraction and analysis
uv run python -m massgen.cli \
  --config massgen/configs/tools/code-execution/multi_agent_playwright_automation.yaml \
  "Navigate to https://news.ycombinator.com, extract the top 10 stories, and create a summary report"

β†’ See detailed case studies with real session logs and outcomes

Interactive Mode & Advanced Usage

Multi-Turn Conversations:

# Start interactive chat (no initial question)
uv run python -m massgen.cli \
  --config massgen/configs/basic/multi/three_agents_default.yaml

# Debug mode for troubleshooting
uv run python -m massgen.cli \
  --config massgen/configs/basic/multi/three_agents_default.yaml \
  --debug "Your question"

Configuration Files

MassGen configurations are organized by features and use cases. See the Configuration Guide for detailed organization and examples.

Quick navigation:

See MCP server setup guides: Discord MCP | Twitter MCP

Backend Configuration Reference

For detailed configuration of all supported backends (OpenAI, Claude, Gemini, Grok, etc.), see:

β†’ Backend Configuration Guide

Interactive Multi-Turn Mode

MassGen supports an interactive mode where you can have ongoing conversations with the system:

# Start interactive mode with a single agent (no tool enabled by default)
uv run python -m massgen.cli --model gpt-5-mini

# Start interactive mode with configuration file
uv run python -m massgen.cli \
  --config massgen/configs/basic/multi/three_agents_default.yaml

Interactive Mode Features:

  • Multi-turn conversations: Multiple agents collaborate to chat with you in an ongoing conversation
  • Real-time coordination tracking: Live visualization of agent interactions, votes, and decision-making processes
  • Interactive coordination table: Press r to view complete history of agent coordination events and state transitions
  • Real-time feedback: Displays real-time agent and system status with enhanced coordination visualization
  • Clear conversation history: Type /clear to reset the conversation and start fresh
  • Easy exit: Type /quit, /exit, /q, or press Ctrl+C to stop

Watch the recorded demo:

MassGen Case Study

5. πŸ“Š View Results

The system provides multiple ways to view and analyze results:

Real-time Display

  • Live Collaboration View: See agents working in parallel through a multi-region terminal display
  • Status Updates: Real-time phase transitions, voting progress, and consensus building
  • Streaming Output: Watch agents' reasoning and responses as they develop

Watch an example here:

MassGen Case Study

Comprehensive Logging

All sessions are automatically logged with detailed information. Log files can also be viewed through the interactive UI.

Logging Storage Structure

Logs are organized in the following directory hierarchy:

massgen_logs/
└── log_{timestamp}/
    β”œβ”€β”€ agent_outputs/
    β”‚   β”œβ”€β”€ agent_id.txt
    β”‚   β”œβ”€β”€ final_presentation_agent_id.txt
    β”‚   └── system_status.txt
    β”œβ”€β”€ agent_id/
    β”‚   └── {answer_generation_timestamp}/
    β”‚       └── files_included_in_generated_answer
    β”œβ”€β”€ final_workspace/
    β”‚   └── agent_id/
    β”‚       └── {answer_generation_timestamp}/
    β”‚           └── files_included_in_generated_answer
    └── massgen.log / massgen_debug.log

Directory Structure Explanation

  • log_{timestamp}: Main log directory identified by session timestamp
  • agent_outputs/: Contains text outputs from each agent
    • agent_id.txt: Raw output from each agent
    • final_presentation_agent_id.txt: Final presentation for the selected agent
    • system_status.txt: System status information
  • agent_id/: Directory for each agent containing answer versions
    • {answer_generation_timestamp}/: Timestamp directory for each answer version
      • files_included_in_generated_answer: All relevant files in that version
  • final_workspace/: Final presentation for selected agents
    • agent_id/: Selected agent id
      • {answer_generation_timestamp}/: Timestamp directory for final presentation
        • files_included_in_generated_answer: All relevant files in final presentation
  • massgen.log / massgen_debug.log: Main log files; massgen.log holds general logging, while massgen_debug.log holds verbose debugging information

Important Note

The final presentation continues to be stored in each Claude Code Agent's workspace as before. After generating the final presentation, the relevant files will be copied to the final_workspace/ directory.
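
To follow a session's log while MassGen is running, you can tail the main log file (the path follows the structure above; substitute your session's timestamp):

# Follow the main log of a running session (use massgen_debug.log when --debug is enabled)
tail -f massgen_logs/log_{timestamp}/massgen.log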

πŸ’‘ Case Studies


To see how MassGen works in practice, check out the detailed case studies based on real session logs.


πŸ—ΊοΈ Roadmap

MassGen is currently in its foundational stage, with a focus on parallel, asynchronous multi-agent collaboration and orchestration. Our roadmap is centered on transforming this foundation into a highly robust, intelligent, and user-friendly system, while enabling frontier research and exploration. An earlier version of MassGen can be found here.

⚠️ Early Stage Notice: As MassGen is in active development, please expect upcoming breaking architecture changes as we continue to refine and improve the system.

Recent Achievements (v0.0.32)

πŸŽ‰ Released: October 15, 2025

Docker Execution Mode

  • Container-Based Isolation: Secure command execution in isolated Docker containers, preventing host filesystem access
  • Persistent State Management: Packages and dependencies persist across conversation turns, eliminating redundant setup
  • Multi-Agent Support: Each agent receives a dedicated isolated container, enabling safe parallel execution
  • Configurable Security: Resource limits (CPU, memory), network isolation modes, and read-only volume mounts

MCP Architecture Refactoring

  • Simplified Client: Renamed MultiMCPClient to MCPClient reflecting streamlined architecture
  • Code Consolidation: Removed deprecated modules and consolidated duplicate MCP protocol handling
  • Improved Maintainability: Standardized type hints, enhanced error handling, cleaner code organization

Claude Code Docker Integration

  • Automatic Tool Management: The Bash tool is automatically disabled in Docker mode, routing commands through execute_command
  • MCP Auto-Permissions: Automatic approval for MCP tools while preserving security validation
  • Enhanced Guidance: System messages prevent git repository confusion between host and container environments

Configuration and Testing

  • Docker Documentation: massgen/docker/README.md with setup guide and build scripts
  • Example Configurations: docker_simple.yaml, docker_multi_agent.yaml, docker_with_resource_limits.yaml, docker_claude_code.yaml, docker_verification.yaml
  • Testing: Comprehensive test suite validating Docker and local execution modes

Previous Achievements (v0.0.3 - v0.0.31)

βœ… Universal Command Execution (v0.0.31): MCP-based execute_command tool works across Claude, Gemini, OpenAI, and Chat Completions providers, AG2-inspired security with permission management and command filtering, code execution in planning mode for safer coordination

βœ… AG2 Group Chat Integration (v0.0.31): Multi-agent conversations using AG2's GroupChat and GroupChatManager frameworks, smart speaker selection (automatic, round-robin, manual) powered by LLMs, enhanced AG2 adapter supporting native group chat coordination

βœ… Audio & Video Generation (v0.0.31): Audio tools for text-to-speech and transcription, video generation using OpenAI's Sora-2 API, multimodal expansion beyond text and images

βœ… Multimodal Support Extension (v0.0.30): Audio and video processing for Chat Completions and Claude backends (WAV, MP3, MP4, AVI, MOV, WEBM formats), flexible media input via local paths or URLs, extended base64 encoding for audio/video files, configurable file size limits

βœ… Claude Agent SDK Migration (v0.0.30): Package migration from claude-code-sdk to claude-agent-sdk>=0.0.22, improved bash tool permission validation, enhanced system message handling

βœ… Qwen API Integration (v0.0.30): Added Qwen API provider to Chat Completions ecosystem with QWEN_API_KEY support, video understanding configuration examples

βœ… MCP Planning Mode (v0.0.29): Strategic planning coordination strategy for safer MCP tool usage, multi-backend support (Response API, Chat Completions, Gemini), agents plan without execution during coordination, 5 planning mode configurations

βœ… File Operation Safety (v0.0.29): Read-before-delete enforcement with FileOperationTracker class, PathPermissionManager integration with operation tracking methods, enhanced file operation safety mechanisms

βœ… AG2 Framework Integration (v0.0.28): Adapter system for external agent frameworks, AG2 ConversableAgent and AssistantAgent support with async execution, code execution in multiple environments (Local, Docker, Jupyter, YepCode), 4 ready-to-use AG2 configurations

βœ… Multimodal Support - Image Processing (v0.0.27): New stream_chunk module for multimodal content, image generation and understanding capabilities, file upload and search for document Q&A, Claude Sonnet 4.5 support, enhanced workspace multimodal tools

βœ… File Deletion and Workspace Management (v0.0.26): New MCP tools (delete_file, delete_files_batch, compare_directories, compare_files) for workspace cleanup and file comparison, consolidated _workspace_tools_server.py, enhanced path permission manager

βœ… Protected Paths and File-Based Context Paths (v0.0.26): Protect specific files within write-permitted directories, grant access to individual files instead of entire directories

βœ… Multi-Turn Filesystem Support (v0.0.25): Multi-turn conversation support with persistent context across turns, automatic .massgen directory structure, workspace snapshots and restoration, enhanced path permission system with smart exclusions, and comprehensive backend improvements

βœ… SGLang Backend Integration (v0.0.25): Unified vLLM/SGLang backend with auto-detection, support for SGLang-specific parameters like separate_reasoning, and dual server support for mixed vLLM and SGLang deployments

βœ… vLLM Backend Support (v0.0.24): Complete integration with vLLM for high-performance local model serving, POE provider support, GPT-5-Codex model recognition, backend utility modules refactoring, and comprehensive bug fixes including streaming chunk processing

βœ… Backend Architecture Refactoring (v0.0.23): Major code consolidation with new base_with_mcp.py class reducing ~1,932 lines across backends, extracted formatter module for better code organization, and improved maintainability through unified MCP integration

βœ… Workspace Copy Tools via MCP (v0.0.22): Seamless file copying capabilities between workspaces, configuration organization with hierarchical structure, and enhanced file operations for large-scale collaboration

βœ… Grok MCP Integration (v0.0.21): Unified backend architecture with full MCP server support, filesystem capabilities through MCP servers, and enhanced configuration files

βœ… Claude Backend MCP Support (v0.0.20): Extended MCP integration to Claude backend, full MCP protocol and filesystem support, robust error handling, and comprehensive documentation

βœ… Comprehensive Coordination Tracking (v0.0.19): Complete coordination tracking and visualization system with event-based tracking, interactive coordination table display, and advanced debugging capabilities for multi-agent collaboration patterns

βœ… Comprehensive MCP Integration (v0.0.18): Extended MCP to all Chat Completions backends (Cerebras AI, Together AI, Fireworks AI, Groq, Nebius AI Studio, OpenRouter), cross-provider function calling compatibility, 9 new MCP configuration examples

βœ… OpenAI MCP Integration (v0.0.17): Extended MCP (Model Context Protocol) support to OpenAI backend with full tool discovery and execution capabilities for GPT models, unified MCP architecture across multiple backends, and enhanced debugging

βœ… Unified Filesystem Support with MCP Integration (v0.0.16): Complete FilesystemManager class providing unified filesystem access for Gemini and Claude Code backends, with MCP-based operations for file manipulation and cross-agent collaboration

βœ… MCP Integration Framework (v0.0.15): Complete MCP implementation for Gemini backend with multi-server support, circuit breaker patterns, and comprehensive security framework

βœ… Enhanced Logging (v0.0.14): Improved logging system for better agents' answer debugging, new final answer directory structure, and detailed architecture documentation

βœ… Unified Logging System (v0.0.13): Centralized logging infrastructure with debug mode and enhanced terminal display formatting

βœ… Windows Platform Support (v0.0.13): Windows platform compatibility with improved path handling and process management

βœ… Enhanced Claude Code Agent Context Sharing (v0.0.12): Claude Code agents now share workspace context by maintaining snapshots and temporary workspace in orchestrator's side

βœ… Documentation Improvement (v0.0.12): Updated README with current features and improved setup instructions

βœ… Custom System Messages (v0.0.11): Enhanced system message configuration and preservation with backend-specific system prompt customization

βœ… Claude Code Backend Enhancements (v0.0.11): Improved integration with better system message handling, JSON response parsing, and coordination action descriptions

βœ… Azure OpenAI Support (v0.0.10): Integration with Azure OpenAI services including GPT-4.1 and GPT-5-chat models with async streaming

βœ… MCP (Model Context Protocol) Support (v0.0.9): Integration with MCP for advanced tool capabilities in Claude Code Agent, including Discord and Twitter integration

βœ… Timeout Management System (v0.0.8): Orchestrator-level timeout with graceful fallback and enhanced error messages

βœ… Local Model Support (v0.0.7): Complete LM Studio integration for running open-weight models locally with automatic server management

βœ… GPT-5 Series Integration (v0.0.6): Support for OpenAI's GPT-5, GPT-5-mini, GPT-5-nano with advanced reasoning parameters

βœ… Claude Code Integration (v0.0.5): Native Claude Code backend with streaming capabilities and tool support

βœ… GLM-4.5 Model Support (v0.0.4): Integration with ZhipuAI's GLM-4.5 model family

βœ… Foundation Architecture (v0.0.3): Complete multi-agent orchestration system with async streaming, builtin tools, and multi-backend support

βœ… Extended Provider Ecosystem: Support for 15+ providers including Cerebras AI, Together AI, Fireworks AI, Groq, Nebius AI Studio, and OpenRouter

Key Future Enhancements

  • Bug Fixes & Backend Improvements: Fixing image generation path issues and adding Claude multimodal support
  • Advanced Agent Collaboration: Exploring improved communication patterns and consensus-building protocols to improve agent synergy
  • Expanded Model Integration: Adding support for more frontier models and local inference engines
  • Improved Performance & Scalability: Optimizing the streaming and logging mechanisms for better performance and resource management
  • Enhanced Developer Experience: Completing tool registration system and web interface for better visualization

We welcome community contributions to achieve these goals.

v0.0.33 Roadmap

Version 0.0.33 focuses on developer experience improvements and PyPI distribution:

Required Features

  • Configuration Builder CLI: Interactive command-line tool for generating and validating MassGen configurations
  • PyPI Package Release v0.1.0: Official PyPI package representing a new usage paradigm with easy pip installation

Optional Features

  • Nested Chat Integration: Complete AG2 nested chat pattern support for hierarchical agent conversations
  • DSPy Integration: Framework integration for prompt optimization and systematic agent improvement

Key technical approach:

  • Config Builder: Interactive prompts, configuration templates, validation and error checking, preset support
  • PyPI Package: Production-ready distribution with proper dependencies, documentation, and migration guide
  • Nested Chat: AG2 integration for hierarchical conversations with parent-child agent relationships
  • DSPy: Prompt optimization, performance tuning, and systematic agent improvement workflows

For detailed milestones and technical specifications, see the full v0.0.33 roadmap.


🀝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details.


πŸ“„ License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


⭐ Star this repo if you find it useful! ⭐

Made with ❀️ by the MassGen team

⭐ Star History

Star History Chart
