A secure, extensible Rust runtime framework for vertical AI agents, featuring 42+ LLM providers, 25+ delivery channels, and an L0–L9 layered security governance model.
Loong (also known as LoongClaw) is a security-first AI agent runtime framework built in Rust, structured as an 8-crate workspace with a strict acyclic dependency graph. On the LLM access layer, it integrates 42+ providers (OpenAI, Volcengine, BytePlus Coding, Xiaomi, etc.) with automatic model discovery and manual pinning. On the delivery layer, it offers 25+ channels (Feishu/Lark, WeCom, Telegram, WhatsApp, Teams, etc.), with deep Feishu integration covering QR code registration, persistent WebSocket connections, and document/bitable/calendar operations.
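As a rough illustration of how discovery and pinning might coexist, consider a provider config along these lines. The keys and values below are hypothetical, chosen to illustrate the idea; they are not Loong's actual configuration schema.

```toml
# Hypothetical provider configuration (illustrative key names only).
[provider.openai]
api_key_env = "OPENAI_API_KEY"
discovery = "auto"          # enumerate available models at startup

[provider.volcengine]
api_key_env = "ARK_API_KEY"
model = "some-model-id"     # manual pinning: skip discovery, use this model
```

Manual pinning is useful when a provider's discovery endpoint lists models the deployment should not use, or when reproducible behavior across upgrades matters more than picking up new models automatically.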
Security governance is Loong's core differentiator: the L0–L9 layered model enforces policy constraints from kernel ABI contracts (L0) through the bootstrap lifecycle (L9). Every tool invocation requires policy engine approval, with support for human approval gates, capability token lifecycle management, highest-priority denylist interception, and JSONL SIEM audit log export (with optional fail-closed mode).
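The check ordering described above can be sketched in a few lines of Rust. This is a minimal illustration of the layering (denylist first, then approval gates, then allow), not Loong's actual policy engine API; all names here are invented for the example.

```rust
// Hypothetical sketch of a layered tool-invocation policy check.
// Ordering mirrors the description: denylist interception has the
// highest priority and cannot be overridden by later layers.

#[derive(Debug, PartialEq)]
pub enum Decision {
    Deny,
    RequireApproval, // gate the call behind a human approver
    Allow,
}

pub struct Policy {
    pub denylist: Vec<String>,          // tools blocked outright
    pub approval_required: Vec<String>, // tools needing human sign-off
}

/// Decide whether a named tool invocation may proceed.
pub fn decide(tool: &str, policy: &Policy) -> Decision {
    // 1. Denylist runs first and short-circuits everything else.
    if policy.denylist.iter().any(|t| t == tool) {
        return Decision::Deny;
    }
    // 2. Human approval gate for sensitive tools.
    if policy.approval_required.iter().any(|t| t == tool) {
        return Decision::RequireApproval;
    }
    // 3. Otherwise the invocation is permitted.
    Decision::Allow
}

fn main() {
    let policy = Policy {
        denylist: vec!["shell_exec".into()],
        approval_required: vec!["file_write".into()],
    };
    assert_eq!(decide("shell_exec", &policy), Decision::Deny);
    assert_eq!(decide("file_write", &policy), Decision::RequireApproval);
    assert_eq!(decide("web_search", &policy), Decision::Allow);
    println!("policy checks passed");
}
```

In a fail-closed configuration, any error while evaluating the policy (or writing the audit record) would map to `Decision::Deny` rather than falling through to `Allow`.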
For extensibility, Loong provides a manifest-first plugin system supporting WASM and process bridge execution modes, hot-plug/hot-fix workflows, and multi-language plugin IR. The built-in tool system includes shell execution, file read/write/edit, and web search, paired with SQLite-backed memory storage and semantic retrieval extensions for workspace memory persistence. The session layer supports delegate sub-task orchestration, trajectory tracking, and background task management.
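A manifest-first plugin system typically means the manifest, not the binary, is the unit of registration. The fragment below is a hedged sketch of what such a manifest could declare; the field names are illustrative assumptions, not Loong's actual manifest format.

```toml
# Hypothetical plugin manifest (field names are illustrative).
[plugin]
name = "weather-lookup"
version = "0.1.0"
execution = "wasm"            # or "process-bridge"

[plugin.capabilities]
network = ["api.example.com"] # declared up front so the policy
filesystem = []               # engine can vet the plugin before load

[[plugin.tools]]
name = "get_forecast"
description = "Fetch a weather forecast for a location"
```

Declaring execution mode and capabilities in the manifest lets the runtime decide how to sandbox the plugin, and supports hot-plug/hot-fix workflows, before any plugin code runs.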
On code quality, the workspace globally denies `unsafe_code`, `unwrap_used`, `expect_used`, `panic`, and similar patterns, backed by a strict clippy deny list. The current version is 0.1.2-alpha.1 under the MIT license, in early rapid iteration. It supports Linux, macOS, and Windows, including Android Termux builds.
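Workspace-wide lint denies of this kind are expressed through Cargo's `[workspace.lints]` tables (stable since Rust 1.74). The following is a sketch of how such a configuration looks; Loong's actual `Cargo.toml` may deny a longer list.

```toml
# Sketch of workspace-level lint configuration in the root Cargo.toml.
[workspace.lints.rust]
unsafe_code = "deny"

[workspace.lints.clippy]
unwrap_used = "deny"
expect_used = "deny"
panic = "deny"
```

Each member crate then opts in with `lints.workspace = true` in its own `Cargo.toml`, so the same deny list applies uniformly across all eight crates.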