
IronClaw

Added May 4, 2026
Agent & Tooling
Open Source
Rust · Model Context Protocol · AI Agents · CLI · Agent & Tooling · Protocol, API & Integration · Security & Privacy

An Agent OS focused on privacy, security and extensibility, providing an always-available personal AI assistant through WASM-sandboxed execution, multi-channel access, and persistent memory.

IronClaw is an open-source Agent OS maintained by the nearai organization, primarily written in Rust (86.1%), positioned as a security-first personal AI assistant platform. Its core design revolves around three pillars:

Security First: Untrusted tools execute in isolated WASM sandboxes with a capability-based permission model. Credentials are injected at the host boundary and never exposed to tool code, with leak detection enabled. Built-in prompt injection defense supports four policy levels (Block/Warn/Review/Sanitize). HTTP endpoint allowlists, per-tool rate limiting, and resource constraints form a defense-in-depth strategy. The security pipeline follows: WASM → Allowlist Validator → Leak Scan(request) → Credential Injector → Execute → Leak Scan(response) → WASM.
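The pipeline above can be sketched as a host-side guard around tool execution. This is an illustrative sketch only: the type and function names (`ToolRequest`, `run_tool`, `inject_credentials`) are assumptions, not IronClaw's actual API; the key property shown is that credentials are injected after the request leaves the sandbox and both directions are leak-scanned.

```rust
/// Hypothetical sketch of the host-side security pipeline; names are
/// illustrative, not IronClaw's real types.
struct ToolRequest { url: String, body: String }
struct ToolResponse { body: String }

const ALLOWLIST: &[&str] = &["api.example.com"];
const SECRET: &str = "sk-test-123"; // stand-in for a host-held credential

fn allowlisted(req: &ToolRequest) -> bool {
    ALLOWLIST.iter().any(|host| req.url.contains(host))
}

fn leak_scan(text: &str) -> Result<(), String> {
    // A real scanner matches known credential patterns; this checks one literal.
    if text.contains(SECRET) { Err("credential leak detected".into()) } else { Ok(()) }
}

fn inject_credentials(req: &mut ToolRequest) {
    // Credentials are attached at the host boundary, never visible to tool code.
    req.body = format!("{}&token={}", req.body, SECRET);
}

fn execute(_req: &ToolRequest) -> ToolResponse {
    ToolResponse { body: "ok".into() } // placeholder for the real HTTP call
}

fn run_tool(mut req: ToolRequest) -> Result<String, String> {
    if !allowlisted(&req) { return Err("endpoint not allowlisted".into()); }
    leak_scan(&req.body)?;          // scan the request before credentials exist in it
    inject_credentials(&mut req);
    let resp = execute(&req);
    leak_scan(&resp.body)?;         // scan the response before it returns to WASM
    Ok(resp.body)
}

fn main() {
    let req = ToolRequest { url: "https://api.example.com/v1/search".into(), body: "q=weather".into() };
    match run_tool(req) {
        Ok(body) => println!("response: {body}"),
        Err(e) => eprintln!("blocked: {e}"),
    }
}
```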

Always Available: Multi-channel access via REPL, HTTP Webhook, WASM Channels (Telegram/Slack/Discord), and Web Gateway with SSE/WebSocket real-time streaming. Docker sandboxes provide container-level isolation with orchestrator/worker mode and per-task tokens. The Routines Engine supports Cron scheduling, event triggers, and Webhook handling. A heartbeat system enables proactive background monitoring, while parallel jobs and self-healing mechanisms ensure service continuity.
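The three Routines Engine trigger kinds can be modeled as a small enum. This is a sketch under stated assumptions: the `Trigger` type and `describe` helper are hypothetical, not IronClaw's real routine API; only the trigger categories (cron, event, webhook) come from the description above.

```rust
/// Hypothetical model of the three routine trigger kinds; names are
/// illustrative, not IronClaw's actual types.
enum Trigger {
    Cron(&'static str),    // five-field schedule, e.g. "0 9 * * *"
    Event(&'static str),   // internal event name, e.g. "email.received"
    Webhook(&'static str), // inbound HTTP path, e.g. "/hooks/deploy"
}

fn describe(trigger: &Trigger) -> String {
    match trigger {
        Trigger::Cron(expr) => format!("runs on schedule {expr}"),
        Trigger::Event(name) => format!("fires on event {name}"),
        Trigger::Webhook(path) => format!("handles POST {path}"),
    }
}

fn main() {
    let routines = [
        Trigger::Cron("0 9 * * *"),
        Trigger::Event("email.received"),
        Trigger::Webhook("/hooks/deploy"),
    ];
    for t in &routines {
        println!("{}", describe(t));
    }
}
```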

Self-Expanding & Persistent Memory: Dynamically builds WASM tools from natural language descriptions. Connects to external capability servers via MCP protocol. Plugin architecture enables plug-and-play without restarts. The persistent memory layer uses PostgreSQL + pgvector with hybrid full-text and vector search (Reciprocal Rank Fusion), while workspace filesystem and identity files maintain cross-session context.
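Reciprocal Rank Fusion, the merge step named above, scores each document as the sum of 1/(k + rank) over every ranked list it appears in (k is a damping constant, commonly 60). A minimal sketch, independent of IronClaw's actual implementation:

```rust
use std::collections::HashMap;

/// Reciprocal Rank Fusion: merge ranked result lists (e.g. full-text and
/// vector search) into one fused ranking. Each appearance of a document
/// at 1-based rank r contributes 1 / (k + r) to its score.
fn rrf(ranked_lists: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in ranked_lists {
        for (idx, doc) in list.iter().enumerate() {
            *scores.entry(doc.to_string()).or_insert(0.0) += 1.0 / (k + (idx as f64 + 1.0));
        }
    }
    let mut fused: Vec<_> = scores.into_iter().collect();
    // Highest fused score first; scores are finite, so the comparison is total.
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    let fulltext = vec!["doc_a", "doc_b", "doc_c"]; // keyword ranking
    let vector = vec!["doc_b", "doc_d", "doc_a"];   // embedding ranking
    for (doc, score) in rrf(&[fulltext, vector], 60.0) {
        println!("{doc}: {score:.5}");
    }
}
```

Note that doc_b wins: ranking 2nd and 1st across both lists beats doc_a's 1st and 3rd, which is exactly the consensus behavior that makes RRF a good hybrid-search combiner.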

IronClaw unifies 15+ LLM backends including Anthropic, OpenAI, Google Gemini (OAuth, no API key), Ollama (local inference), AWS Bedrock, MiniMax, and GitHub Copilot, with OpenAI-compatible endpoint support. All data is stored locally with AES-256-GCM encryption, zero telemetry, and no data sharing.

The architecture follows a layered design: Channels → Agent Loop → Scheduler/Routines Engine → Workers/Orchestrator → Tool Registry. Core components include Agent Loop (message processing & job coordination), Router (intent classification), Scheduler (parallel job scheduling), Worker (LLM inference & tool calls), Orchestrator (container lifecycle management), Web Gateway (browser UI), Routines Engine (background tasks), Workspace (persistent memory & hybrid search), and Safety Layer (prompt injection defense & content sanitization).
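The Router's intent-classification step can be illustrated with a toy heuristic. This is a sketch only: the `Intent` variants and the `classify` rules are assumptions standing in for IronClaw's real classifier, which the source describes but does not specify.

```rust
/// Toy stand-in for the Router's intent classification; the variants and
/// rules are hypothetical, not IronClaw's actual logic.
#[derive(Debug, PartialEq)]
enum Intent {
    Chat,              // answer directly in the Agent Loop
    ToolCall,          // dispatch to a Worker for tool execution
    RoutineManagement, // hand off to the Routines Engine
}

fn classify(message: &str) -> Intent {
    if message.starts_with("/routine") {
        Intent::RoutineManagement
    } else if message.contains("search") {
        Intent::ToolCall
    } else {
        Intent::Chat
    }
}

fn main() {
    for msg in ["/routine list", "search the docs for pgvector", "hello"] {
        println!("{msg:?} -> {:?}", classify(msg));
    }
}
```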

Dual-licensed under Apache-2.0 / MIT. Latest version v0.27.0 (2026-04-29), with 30 releases and 1,336+ commits. Installation via prebuilt binaries, shell scripts, Homebrew, Windows Installer, or source compilation. Prerequisites: Rust 1.92+, PostgreSQL 15+ (with the pgvector extension), and a NEAR AI account. Engine v2, an opt-in next-generation engine enabled via ENGINE_V2=true, introduces capability state vocabularies and normalized tool permissions.
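Enabling Engine v2 is a one-line environment toggle. Only the ENGINE_V2 flag comes from the README; the binary name below is an assumption:

```shell
# Opt in to Engine v2 for a single run (binary name is illustrative)
ENGINE_V2=true ironclaw
```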

Note: The website https://www.ironclaw.com listed in the README has not been verified for current accessibility or active status.
