
Maestro

Added Apr 23, 2026
Agent & Tooling
Open Source
Workflow Automation · Multi-Agent System · Model Context Protocol · AI Agents · Agent Framework · CLI · Agent & Tooling · Developer Tools & Coding · Automation, Workflow & RPA · Security & Privacy

A multi-agent development orchestration platform spanning four AI coding CLI runtimes, offering 39 expert agents and dual-path workflows.

Positioning#

Maestro is a multi-agent development orchestration platform spanning four AI coding CLI runtimes: Gemini CLI, Claude Code, OpenAI Codex, and Qwen Code. It performs no LLM inference itself; instead, it uses the supported CLIs as execution engines, positioning itself at the orchestration layer to address workflow fragmentation and the lack of systematic multi-agent collaboration across different AI coding assistants.

Core Capabilities#

  • 39 Expert Agents: Covering frontend, backend, full-stack, DevOps, security, database, ML/AI, cloud architecture, platform engineering, SRE, observability, release management, mobile, and legacy systems (COBOL, DB2, z/OS, HLASM, IBM i).
  • Independent Quality Gates: 7 review tools callable independently of the orchestration flow—review, debug, security-audit, perf-check, seo-audit, a11y-audit, compliance-check.
  • MCP Server: 17 built-in MCP tool packages providing session management, design gates, phase reconciliation, and other low-level capabilities.
  • Session Persistence: Sessions archived in docs/maestro/ with Resume, Status, and Archive operations for recovering long-running orchestration tasks after interruption.

Workflow Modes#

  • Express (Fast Path): For simple tasks — 1-2 clarification questions → brief → delegation to a single expert → code review → archive.
  • Standard (Full Path): For medium-to-complex tasks — four phases (Design → Plan → Execute → Complete) with explicit approval gates; the final review blocks on unresolved Critical/Major findings.

Architecture#

Maestro adopts a src-first, generated-runtime architecture: all shared behavior and content is written once under src/, and a generator pipeline (scripts/generate.js) produces runtime-specific adapter files — TOML commands for Gemini CLI, Markdown skills for Claude Code, plugin skills for Codex, and Gemini CLI-compatible extensions for Qwen Code.

  • Manifest System: src/manifest.js declares source-to-runtime output mapping rules with glob pattern and transform pipeline support.
  • 6 Transformers: parse-frontmatter, extract-examples, rebuild-frontmatter, agent-stub, skill-discovery-stub, skill-metadata.
  • Design Gate: Server-enforced design approval flow; MCP tool create_session rejects session creation when design is unapproved.
  • Entry Point Registry: 9 entry points + 3 core commands defined in src/entry-points/, generated to each runtime's command/skill surface.
  • Engineering: Node.js built-in test runner (node --test) with c8 coverage; Git Hooks installed via scripts/install-git-hooks.js.
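Maestro's actual manifest schema is not documented here; as a hypothetical sketch, a declarative mapping that pairs a source glob with a transform pipeline and per-runtime output patterns might look like the following (all names and paths are illustrative, not Maestro's real API):

```javascript
// Hypothetical manifest entry: a source glob, an ordered transform
// pipeline, and one output path pattern per target runtime.
const manifest = [
  {
    sources: "src/agents/**/*.md",
    transforms: ["parse-frontmatter", "agent-stub", "rebuild-frontmatter"],
    outputs: {
      "claude-code": "skills/{name}.md",    // Markdown skills
      "gemini-cli": "commands/{name}.toml", // TOML commands
    },
  },
];

// Placeholder transform dispatcher for the sketch: records which
// transforms ran instead of doing real frontmatter work.
function applyTransform(name, doc) {
  return { ...doc, applied: [...(doc.applied ?? []), name] };
}

// A trivial generator loop: run each transform in order, then compute
// one output path per runtime target.
function generate(entry, file) {
  let doc = { name: file.name, body: file.body };
  for (const t of entry.transforms) doc = applyTransform(t, doc);
  return Object.fromEntries(
    Object.entries(entry.outputs).map(([runtime, pattern]) => [
      runtime,
      pattern.replace("{name}", doc.name),
    ])
  );
}

console.log(generate(manifest[0], { name: "frontend", body: "" }));
// → { 'claude-code': 'skills/frontend.md', 'gemini-cli': 'commands/frontend.toml' }
```

The appeal of this shape is that adding a fifth runtime means adding one key to `outputs` per entry, with the shared transform pipeline untouched.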

Installation & Usage#

Requirements: Node.js ≥ 20, plus any supported AI CLI.

  • Gemini CLI: gemini extensions install https://github.com/josstei/maestro-orchestrate
  • Claude Code: claude plugin marketplace add josstei/maestro-orchestrate, then claude plugin install maestro@maestro-orchestrator --scope user
  • Codex: codex plugin marketplace add josstei/maestro-orchestrate
  • Qwen Code: qwen extensions install https://github.com/josstei/maestro-orchestrate

Gemini CLI and Qwen Code require enabling experimental sub-agents in settings: {"experimental": {"enableAgents": true}}
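As a sketch, the fragment above would be merged into the CLI's settings file (the exact file location depends on your Gemini CLI or Qwen Code setup and is not specified here):

```json
{
  "experimental": {
    "enableAgents": true
  }
}
```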

Command Reference#

12 entry commands: orchestrate (auto-classify task and select workflow), execute (skip orchestration), resume, status, archive, review, debug, security-audit, perf-check, seo-audit, a11y-audit, compliance-check. Claude Code/Gemini CLI/Qwen Code use /maestro:<command> format; Codex uses $maestro:<command> format.
