A Python-based terminal AI programming assistant featuring multi-agent collaboration, cross-session persistent memory, plan-first workflow, and MCP/ACP tool integration.
Invincat CLI is a terminal-based AI programming assistant built on DeepAgents CLI and LangChain/LangGraph, currently at version 0.0.34 (Beta).
## Multi-Agent Runtime Architecture
The project employs a multi-agent runtime with clear role boundaries:
- Main Agent: End-to-end task execution—file I/O, command running, MCP tool invocation, subtask coordination—but cannot directly read/write `memory_user.json` / `memory_project.json`.
- Planner Agent (`/plan`): Read-only analysis of requirements, generating executable todo lists for user approval before handoff to the Main Agent. No implementation actions.
- Memory Agent: Asynchronously extracts persistent memories after each turn, performing scoring and operations (create / update / rescore / retier / archive / delete / noop). Extraction is conservative, skipping low-confidence or temporary facts.
- Local Subagents: Parallel processing of bounded in-process subtasks, operating only within delegated scope.
- Async Subagents: Asynchronous offloading of long-running or remote tasks, treated as delegated workers.
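The role boundaries above can be sketched as a capability table. This is a hypothetical illustration, not the actual Invincat CLI configuration schema; the agent names, tool names, and dictionary layout are assumptions for clarity.

```python
# Hypothetical sketch of per-agent capability boundaries; the real
# Invincat CLI internals may represent these differently.
AGENT_ROLES = {
    "main": {
        # Main Agent: full task execution, but no direct memory-file access
        "tools": ["read_file", "edit_file", "execute", "mcp_call", "delegate"],
        "memory_write": False,
    },
    "planner": {
        # Planner Agent: read-only analysis, no implementation actions
        "tools": ["read_file"],
        "memory_write": False,
    },
    "memory": {
        # Memory Agent: the only role allowed to write memory files,
        # via a fixed set of operations
        "tools": [],
        "memory_write": True,
        "operations": ["create", "update", "rescore", "retier",
                       "archive", "delete", "noop"],
    },
}

def allowed(agent: str, tool: str) -> bool:
    """Return True if the given agent may invoke the given tool."""
    return tool in AGENT_ROLES[agent]["tools"]
```

Under this sketch, `allowed("planner", "edit_file")` is false: the planner can analyze but never modify files.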
## Memory System
- JSON Single Source of Truth: Runtime relies solely on `memory_user.json` (user-level cross-project preferences) and `memory_project.json` (project-level repository conventions).
- Dual-Scope Isolation: User-level and project-level memories are strictly separated.
- Decoupled Read/Write Pipelines: `RefreshableMemoryMiddleware` handles loading/injection; `MemoryAgentMiddleware` handles extraction/writing, running as `after_agent` post-turn async middleware without blocking interaction latency.
- Incremental Extraction + Fallback: Only consumes incremental messages since the last success; falls back to full processing on history rewrites.
- Evidence-Aware: Prioritizes persistent conventions based on tool evidence (`read_file`, `edit_file`, `execute`, etc.).
- Deterministic Stale Fact Cleanup: Removes outdated or contradictory active memories via rule validation.
- Strong Write Safety: Schema validation, deduplication/conflict protection, path whitelisting, atomic writes (tmp + `os.replace`).
- Visual Management: `/memory` for full-screen inspection and management of both scopes.
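The tmp + `os.replace` pattern mentioned above can be illustrated with a minimal sketch. The function name and JSON layout are assumptions; only the atomic-swap technique itself reflects the source.

```python
import json
import os
import tempfile

def atomic_write_memory(path: str, data: dict) -> None:
    """Write a memory JSON file atomically: serialize to a temp file in
    the same directory, then swap it in with os.replace. Readers never
    observe a half-written file, even if the process crashes mid-write."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, ensure_ascii=False, indent=2)
        os.replace(tmp, path)  # atomic on the same filesystem
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise
```

Creating the temp file in the target directory (not the system temp dir) matters: `os.replace` is only atomic when source and destination are on the same filesystem.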
## Context Management
- Micro-Compaction: Runs automatically before each model invocation, pure algorithm (no LLM), <1ms; groups by "tool call group", preserves dynamic recent window, applies two-level compression on old large tool outputs.
- Auto-Compaction: Automatically compresses old messages to summaries when context window usage exceeds 80%.
- Manual Compaction: `/offload` or `/compact` commands.
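The micro-compaction idea (keep a recent window intact, apply two-level compression to older tool outputs, no LLM involved) can be sketched as follows. The thresholds, message shape, and function names are hypothetical, not Invincat CLI's actual implementation.

```python
def _compress(text: str) -> str:
    """Two-level compression for stale tool outputs (thresholds hypothetical)."""
    if len(text) > 2000:                       # level 2: keep only a head snippet
        return text[:200] + " …[truncated]"
    if len(text) > 500:                        # level 1: mild trim
        return text[:500] + " …[trimmed]"
    return text

def micro_compact(messages: list[dict], recent_window: int = 6) -> list[dict]:
    """Pure-algorithm compaction sketch: messages inside the recent window
    are preserved verbatim; older tool outputs are compressed in place."""
    cutoff = max(0, len(messages) - recent_window)
    return [
        {**m, "content": _compress(m["content"])}
        if i < cutoff and m.get("role") == "tool" else m
        for i, m in enumerate(messages)
    ]
```

Because this is a pure string/list operation with no model call, it can safely run before every invocation at negligible latency, which matches the "<1ms, no LLM" property described above.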
## Plan-First Mode
`/plan` enters planning mode → Planner Agent performs read-only analysis → generates a todo list → user approves → the checklist is handed off to the Main Agent. `/exit-plan` cancels an in-progress planning round. Plan mode enforces safety through two layers: visible-tool filtering plus a runtime allowlist.
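The two safety layers can be sketched like this. The allowlist contents and function names are assumptions for illustration; the source only states that both a visibility filter and a runtime check exist.

```python
# Hypothetical read-only allowlist for plan mode.
PLANNER_ALLOWLIST = {"read_file", "list_dir", "grep"}

def visible_tools(all_tools: list[str]) -> list[str]:
    """Layer 1: only allowlisted (read-only) tools are exposed to the
    planner model, so it normally never sees mutating tools."""
    return [t for t in all_tools if t in PLANNER_ALLOWLIST]

def run_planner_tool(name: str, call):
    """Layer 2: even if the model somehow requests a hidden tool,
    the runtime refuses to execute it."""
    if name not in PLANNER_ALLOWLIST:
        raise PermissionError(f"{name!r} is not allowed in plan mode")
    return call()
```

The redundancy is the point: filtering alone relies on the model never hallucinating a tool name, while the runtime check holds regardless of model behavior.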
## Safety & Approval
Shell/file/network operations require user confirmation by default; Shift+Tab toggles auto-approval mode (status bar shows AUTO); memory storage files are protected by middleware and updated only through the memory pipeline.
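A minimal sketch of such a confirmation gate, assuming a hypothetical set of sensitive tool names and a simple toggle flag (the actual Invincat CLI mechanism is not described in more detail than the paragraph above):

```python
# Hypothetical set of operations that prompt for confirmation by default.
SENSITIVE = {"execute", "edit_file", "http_request"}

class ApprovalGate:
    """Confirmation gate sketch: sensitive operations prompt the user
    unless auto-approval mode (Shift+Tab in the UI) is active."""

    def __init__(self) -> None:
        self.auto = False  # the status bar would show AUTO when True

    def toggle_auto(self) -> None:
        """Flip auto-approval mode on or off."""
        self.auto = not self.auto

    def needs_confirmation(self, tool: str) -> bool:
        """Only sensitive tools prompt, and only while auto mode is off."""
        return tool in SENSITIVE and not self.auto
```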
## Skill System
Predefined workflow templates supporting explicit invocation via `/skill:<name> [args]`, with `SkillsMiddleware` auto-matching and applying skills during normal execution.
## MCP & ACP Support
MCP tool integration via `langchain-mcp-adapters`, ACP (Agent Communication Protocol) integration via `deepagents-acp>=0.0.4`. Commit history also indicates support for WeCom inbound media message bridging.
## Model Configuration
Dual-model mechanism: Primary model (main conversation execution) + Memory model (dedicated memory extraction), with automatic fallback when Memory model is unconfigured. Supported providers include Anthropic (claude-sonnet-4-6, claude-opus-4-7), OpenAI (gpt-4o, o3), Google GenAI (gemini-2.0-flash, gemini-2.5-pro), OpenRouter (all models), and OpenAI-compatible APIs (DeepSeek, Zhipu, local Ollama, etc. via base_url setting).
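The dual-model fallback described above reduces to a small resolution step. The config keys here are hypothetical; only the fallback behavior (memory model defaults to the primary model when unconfigured) comes from the source.

```python
def resolve_models(config: dict) -> tuple[str, str]:
    """Resolve (primary_model, memory_model) from a config mapping.
    The memory model falls back to the primary model when it is
    missing or empty. Key names are assumptions for illustration."""
    primary = config["primary_model"]
    memory = config.get("memory_model") or primary
    return primary, memory
```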
## Installation & Usage
```shell
pip install invincat-cli
cd ~/my-project
invincat-cli
# Run /model after first launch to configure the model and API key
```
Requirements: Python ≥3.11 (supports 3.11/3.12/3.13), cross-platform.
## Unconfirmed Information
- DeepAgents CLI ecosystem relationship: `deepagents==0.4.11` is a core dependency, but the upstream repo/docs have not been deeply verified.
- ACP protocol specifics and use cases are not detailed in the README.
- WeCom bridging feature completeness and usage unclear.
- Skill system built-in skill list not provided.
- PyPI publication status not directly verified.