A local-first multi-agent collaborative office where AI role teams autonomously build and maintain a shared knowledge base, executing tasks via push-driven orchestration without losing context.
WUPHF is a collaborative runtime for AI agent teams, placing multiple role-based agents (CEO, PM, Engineer, Designer, etc.) in a unified "virtual office." The core architecture is built on a push-driven message broker: agents are woken only when a message arrives, so idle agents cost nothing. Each agent runs in an isolated git worktree, uses a fresh session per turn, and achieves a ~97% cache hit rate via Anthropic prompt caching, reducing token consumption roughly 7× compared to cumulative-context orchestrators.
For knowledge management, each agent has a private Notebook for raw context and tentative conclusions, and can autonomously promote verified content to a shared team Wiki. The default markdown Wiki backend builds a knowledge graph on a local git repository, supporting triple-based fact storage, /lookup reference retrieval, and /lint contradiction detection. The system supports four execution backends — Claude Code, Codex CLI, OpenClaw, and local LLMs via OpenCode — with mixed-provider agents allowed in the same channel. The MCP tool surface is dynamically pruned per agent role and collaboration mode to further optimize prompt size and cache efficiency.
Built primarily in Go, WUPHF ships as a single binary for cross-platform use (macOS/Linux/Windows) with a built-in Web UI and an optional tmux TUI. A Telegram Bridge enables bidirectional message bridging, and the Skills ecosystem supports publishing and installing agent skills from public marketplaces. All data is stored locally with no SaaS dependency — users bring their own API keys.
## Core Features
- Multi-Agent Collaboration: Built-in CEO, PM, ENG, DSG, CMO, CRO role agents with Collaborative Mode (broadcast) and Focus Mode (CEO-routed delegation), @mention communication between agents
- Knowledge Management: Two-tier Notebook (private) → Wiki (shared) system; markdown backend supports knowledge graphs, triple facts, contradiction detection
- LLM Runtime: Multi-provider support, fresh session strategy + prompt cache, dynamically pruned MCP tool surface per role/mode, ~87k input tokens/turn measured
- Collaboration & Sharing: `wuphf share` invites co-founders via Tailscale/WireGuard private networks; Skills ecosystem connects to Anthropic marketplace, Lobehub, etc.
- External Integrations: Telegram Bridge (bidirectional), OpenClaw Bridge, Composio cloud-hosted OAuth
## Installation
```sh
npx wuphf                       # Recommended
npm install -g wuphf && wuphf   # Global install
go build -o wuphf ./cmd/wuphf   # Build from source
```
Available packs: founding-team, coding-team, starter, lead-gen-agency, revops
## Architecture Overview
```
Human → Web UI / TUI / 1:1 → Broker (pub/sub + queue) ← Nex / Telegram / Composio
                                      │
                               push on message
                                      ▼
                        Per-agent headless runners
            (Claude Code / Codex, fresh session per turn, scoped MCP)
                                      │
                                      ▼
                       isolated git worktree per agent
```
Three key design decisions: fresh sessions to avoid context bloat, role-based MCP tool pruning for cache optimization, and a push-driven broker for zero idle cost. The current version is v0.108.1 (pre-1.0); the main branch is updated daily, and the project is MIT licensed.