A Python tool to orchestrate multiple AI coding assistants (Claude, Copilot, Cursor, Gemini, Cline, Windsurf) for multi-agent development workflows.
AI-Coding-Orchestrator automates the coordination of multiple AI coding assistants within a git repository. It intelligently assigns tasks to appropriate agents based on their strengths, generates tailored configuration files, manages parallel development workflows using git worktrees, and ensures quality through rigorous validation and testing.
- Intelligent Task Assignment: Automatically assigns tasks to AI agents based on capabilities
- Configuration Generation: Creates agent-specific configs (CLAUDE.md, .cursorrules, etc.)
- Git Worktree Management: Parallel execution with isolated environments
- Quality Validation: Test coverage, static analysis, security scanning
- MCP Integration: Model Context Protocol support for enhanced coordination
- Self-Validation Loop: Agents verify their own work automatically
- Python 3.11 or higher
- Poetry (for dependency management)
- Git
```bash
# Clone the repository
git clone https://github.com/yourusername/ai-coding-orchestrator.git
cd ai-coding-orchestrator

# Install dependencies with Poetry
poetry install

# Activate the virtual environment
poetry shell

# Verify installation
orchestrator --version
```

```bash
# Install with dev dependencies
poetry install --with dev

# Install pre-commit hooks
pre-commit install

# Run tests
pytest

# Run tests with coverage
pytest --cov

# Run type checking
mypy src/

# Run linting
flake8 src/
pylint src/
```

```bash
# Initialize a new orchestration project
orchestrator init

# Parse a DEVELOPMENT_PLAN.md file
orchestrator parse DEVELOPMENT_PLAN.md

# Assign agents to tasks
orchestrator assign DEVELOPMENT_PLAN.md

# Generate agent configuration files
orchestrator generate --name "My Project" --tech-stack python --tech-stack react

# Full orchestration workflow
orchestrator orchestrate DEVELOPMENT_PLAN.md
```

```
ai-coding-orchestrator/
├── src/orchestrator/           # Main package
│   ├── cli.py                  # Command-line interface
│   ├── parser/                 # Plan parsing
│   ├── classifier/             # Task classification
│   ├── config/                 # Config generation
│   ├── worktree/               # Git worktree management
│   ├── validation/             # Quality validation
│   ├── comparison/             # Implementation comparison
│   ├── execution/              # Agent execution
│   ├── review/                 # Human review workflows
│   ├── mcp/                    # MCP integration
│   ├── agentic/                # Self-validation loop
│   ├── plugins/                # Plugin system
│   ├── analytics/              # Performance analytics
│   ├── data/                   # Agent capability matrix, taxonomy
│   ├── templates/              # Jinja2 templates
│   └── schemas/                # JSON schemas
├── tests/                      # Test suite
├── docs/                       # Documentation
├── CLAUDE.md                   # Claude Code instructions
├── DEVELOPMENT_PLAN.md         # Comprehensive development plan
├── IMPLEMENTATION_ROADMAP.md   # Pragmatic 22-week roadmap
└── pyproject.toml              # Poetry configuration
```
- CLAUDE.md - Instructions for Claude Code
- DEVELOPMENT_PLAN.md - Comprehensive 34-week development plan
- IMPLEMENTATION_ROADMAP.md - Pragmatic 22-week implementation roadmap
- multi-agent-coding.md - Research on AI coding assistants
Current Phase: Phase 2 - Git Worktree Automation (Weeks 7-10)
Phase 0 - Foundation (Weeks 1-2): ✅ COMPLETE
- Project structure created
- Pre-commit hooks configured
- GitHub Actions CI setup
- Initial test structure
- Agent capability matrix (agents.yaml)
- Task classification taxonomy (task_taxonomy.yaml)
- JSON schema for plan validation
Phase 1 Week 3 - Markdown Parser: ✅ COMPLETE
- Core data models (Task, Stage, Phase, Plan)
- Plan parser implementation (97.59% coverage)
- Comprehensive test suite (20/20 tests passing)
- Extracts: phases, stages, tasks, objectives, deliverables, validation criteria
Phase 1 Week 4 - Task Classifier & Agent Assignment: ✅ COMPLETE
- Task classification by keyword matching (15 task types)
- Complexity estimation algorithm (simple/medium/complex)
- Agent assignment engine with multi-factor scoring
- Manual override mechanism with validation
- Tech stack detection
- Comprehensive test suite (33/33 tests passing, 97%+ coverage)
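The keyword-matching classification and complexity estimation above can be sketched roughly as follows. This is an illustrative toy, not the project's actual implementation: the keyword table is a made-up subset of the 15 task types (the real taxonomy lives in `task_taxonomy.yaml`), and the complexity thresholds are invented for the example.

```python
# Hypothetical sketch of keyword-based task classification and
# complexity estimation. Keyword lists and thresholds are illustrative.
from dataclasses import dataclass

# Invented subset of task types; the real taxonomy is task_taxonomy.yaml.
TASK_KEYWORDS: dict[str, list[str]] = {
    "testing": ["test", "coverage", "pytest", "assertion"],
    "refactoring": ["refactor", "cleanup", "simplify", "extract"],
    "frontend": ["react", "component", "css", "ui"],
}

@dataclass
class Classification:
    task_type: str
    complexity: str  # "simple" | "medium" | "complex"

def classify(description: str) -> Classification:
    """Pick the task type whose keywords match most often."""
    text = description.lower()
    scores = {
        task_type: sum(text.count(kw) for kw in keywords)
        for task_type, keywords in TASK_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # Toy heuristic: longer task descriptions imply more work.
    n = len(description.split())
    complexity = "simple" if n < 10 else "medium" if n < 30 else "complex"
    return Classification(best if scores[best] > 0 else "general", complexity)
```

In the real system this score would be one input among several; the agent assignment engine then weighs agent capabilities against the classified type and complexity.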
Phase 1 Week 5-6 - Configuration Generation & CLI: ✅ COMPLETE
- Jinja2-based configuration generator
- Templates for all 6 agents (Claude, Cursor, Copilot, Gemini, Cline, Windsurf)
- AGENTS.md universal coordination file
- CLI commands: parse, assign, generate, orchestrate
- End-to-end workflow integration
- Comprehensive test suite (18/18 tests passing, 100% config coverage)
Phase 1 Status: ✅ COMPLETE - MVP working CLI tool ready!
Phase 2 Week 7 - Git Worktree Automation: ✅ COMPLETE
- WorktreeManager class for parallel agent execution
- Automated branch creation (agent/{agent-name}/{task-id})
- Port allocation and environment isolation
- Safety features (uncommitted changes detection)
- Comprehensive test suite (14/14 tests passing, 87.95% coverage)
- All 99 tests passing with 97.41% overall coverage
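The branch naming and isolation scheme above can be sketched with plain git plumbing. Function names here are assumptions for illustration, not the project's actual `WorktreeManager` API:

```python
# Hedged sketch: one worktree per agent/task, on a branch named
# agent/{agent-name}/{task-id}, refusing to run if the main checkout
# is dirty (the safety feature described above).
import subprocess
from pathlib import Path

def agent_branch(agent: str, task_id: str) -> str:
    """Branch naming scheme: agent/{agent-name}/{task-id}."""
    return f"agent/{agent}/{task_id}"

def worktree_path(repo: Path, agent: str, task_id: str) -> Path:
    """A sibling directory per agent/task keeps each checkout isolated."""
    return repo.parent / f"{repo.name}-{agent}-{task_id}"

def create_agent_worktree(repo: Path, agent: str, task_id: str) -> Path:
    """Create a git worktree on a fresh agent branch."""
    dirty = subprocess.run(
        ["git", "-C", str(repo), "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    if dirty:
        raise RuntimeError("uncommitted changes in main checkout")
    path = worktree_path(repo, agent, task_id)
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b",
         agent_branch(agent, task_id), str(path)],
        check=True,
    )
    return path
```

Because each agent gets its own working directory and branch, agents can run concurrently without stepping on each other's uncommitted state.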
Phase 2 Week 8 - Parallel Execution Coordinator: ✅ COMPLETE
- ExecutionCoordinator with async parallel execution
- AgentExecution dataclass for state tracking
- Configurable concurrency limits with Semaphore pattern
- Timeout and failure handling
- PromptGenerator with agent-specific instructions
- Real-time progress monitoring and execution summaries
- CLI integration: execute, cleanup, status commands
- Comprehensive test suite (30/30 tests passing, execution module 96.75-100% coverage)
- All 129 tests passing with 89.99% overall coverage
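The Semaphore pattern for bounded parallel execution mentioned above looks roughly like this. The function names are placeholders, and `run_agent` is a stub standing in for launching a real agent in its worktree:

```python
# Illustrative asyncio.Semaphore pattern: run many agent tasks
# concurrently, but never more than max_concurrent at once,
# with a per-task timeout.
import asyncio

async def run_agent(task_id: str, delay: float = 0.01) -> str:
    """Stand-in for launching one agent in its worktree."""
    await asyncio.sleep(delay)  # simulate agent work
    return f"{task_id}: done"

async def execute_all(task_ids: list[str], max_concurrent: int = 2,
                      timeout: float = 5.0) -> list[str]:
    sem = asyncio.Semaphore(max_concurrent)

    async def bounded(task_id: str) -> str:
        async with sem:  # at most max_concurrent agents hold the semaphore
            return await asyncio.wait_for(run_agent(task_id), timeout)

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(bounded(t) for t in task_ids))

results = asyncio.run(execute_all(["T1", "T2", "T3"]))
```

The semaphore caps resource use (worktrees, ports, API quota) while `asyncio.wait_for` provides the timeout handling noted above; a failed or timed-out task raises inside `gather` where the coordinator can record it.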
Phase 2 Week 9 - Comparison Engine: ✅ COMPLETE
- QualityMetrics dataclass with weighted scoring (coverage 30%, test pass 25%, static analysis 20%, complexity 15%, critical issues 10%)
- MetricsCollector for gathering coverage, complexity, line count, test results, static analysis
- DiffAnalyzer for code comparison and similarity calculation
- ComparisonEngine with multi-implementation analysis
- ComparisonReport with best implementation selection
- Merge recommendation logic with confidence scoring (3 factors: quality, gap, tests)
- Minimum quality threshold (70.0) for automatic recommendations
- Comprehensive test suite (41/41 tests passing, comparison module 87-99% coverage)
- All 170 tests passing across the project
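The weighted scoring above can be sketched directly from the stated weights (coverage 30%, test pass 25%, static analysis 20%, complexity 15%, critical issues 10%). Field names here are illustrative, not the project's exact dataclass:

```python
# Sketch of the weighted quality score, assuming each metric is
# normalized to 0-100 with higher = better.
from dataclasses import dataclass

@dataclass
class QualityMetrics:
    coverage: float          # 0-100
    test_pass_rate: float    # 0-100
    static_analysis: float   # 0-100, higher = cleaner
    complexity: float        # 0-100, higher = simpler code
    critical_issues: float   # 0-100, higher = fewer issues

    def score(self) -> float:
        return (0.30 * self.coverage
                + 0.25 * self.test_pass_rate
                + 0.20 * self.static_analysis
                + 0.15 * self.complexity
                + 0.10 * self.critical_issues)

MIN_RECOMMEND_SCORE = 70.0  # threshold quoted above

m = QualityMetrics(coverage=90, test_pass_rate=100,
                   static_analysis=80, complexity=70, critical_issues=100)
```

An implementation scoring at or above the 70.0 threshold is eligible for automatic merge recommendation; below it, the comparison falls back to human review.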
See IMPLEMENTATION_ROADMAP.md for detailed progress.
Contributions are welcome! Please read CONTRIBUTING.md for guidelines.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
```bash
# Run all tests
pytest

# Run with coverage report
pytest --cov --cov-report=html

# Run specific test file
pytest tests/test_cli.py

# Run with verbose output
pytest -v
```

This project enforces strict code quality standards:
- Type Checking: mypy in strict mode
- Formatting: black (line length: 100)
- Import Sorting: isort (black-compatible)
- Linting: flake8, pylint
- Security: bandit
- Test Coverage: ≥85% required
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- Research on multi-agent AI coding patterns: multi-agent-coding.md
- Inspired by the need to coordinate multiple AI coding assistants effectively
- Built with modern Python tooling: Poetry, pytest, mypy, black
For issues, questions, or contributions, please:
- Open an issue on GitHub
- Check existing documentation
- Review the roadmap and development plan
See IMPLEMENTATION_ROADMAP.md for the detailed 22-week implementation plan.
Upcoming Milestones:
- Phase 1 (Weeks 3-6): Core Engine MVP with working CLI
- Phase 2 (Weeks 7-10): Git Worktree Automation for parallel execution
- Phase 3 (Weeks 11-14): Quality & Validation with safety gates
- Phase 4 (Weeks 15-17): MCP Integration and agentic loops
- Phase 5 (Weeks 18-22): Polish, extensibility, and v1.0 release