Secure, isolated microVM sandbox infrastructure for AI agents, powered by Firecracker/QEMU, with hardware-level isolation and sub-second startup; supports safe code execution, browser automation, snapshot-based state management, and coding agent integration.
SmolVM is AI agent sandbox infrastructure developed by Celesto AI (London), designed to let AI agents safely execute code, operate browsers, and accomplish real tasks in isolated environments. The project builds on KVM-backed microVM technology, using Firecracker on Linux and QEMU on macOS as virtualization backends; each sandbox runs its own independent kernel, giving stronger security boundaries than containers.
Performance-wise, SmolVM achieves sub-second startup: approximately 572 ms for create+start, 2.1 s to SSH readiness, 43 ms per command execution, and roughly 3.5 s for a full lifecycle (create → execute → teardown), suiting the high-frequency, short-lived invocation patterns of AI agents. The project is designed to scale to thousands of concurrent sandbox instances.
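The lifecycle figures above can be checked with a small timing harness like the sketch below. The sandbox operations are stand-in stubs (hypothetical names, not the SmolVM API); only the measurement pattern is the point.

```python
import time

def timed(label, fn):
    """Run fn, print its wall-clock duration in milliseconds, return its result."""
    start = time.perf_counter()
    result = fn()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.0f} ms")
    return result

# Stand-in stubs for the real sandbox phases (hypothetical, not SmolVM's API).
def create_and_start():
    time.sleep(0.01)   # placeholder for the ~572 ms create+start phase

def run_command():
    time.sleep(0.001)  # placeholder for the ~43 ms per-command execution

def teardown():
    time.sleep(0.01)   # placeholder for teardown

timed("create+start", create_and_start)
timed("exec", run_command)
timed("teardown", teardown)
```

Replacing the stubs with real SDK calls would reproduce the create → execute → teardown measurement end to end.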
Core capabilities span six areas:
- Safe Execution: runs LLM-generated Python/Shell code in isolated microVMs, preventing malicious code from escaping to the host.
- Browser Sandbox: agents can navigate websites, fill forms, and take screenshots, with real-time visualization via noVNC (--live mode, localhost:6080).
- Filesystem Isolation: read/write mounting of host directories for secure codebase exploration.
- Network Control: egress domain whitelisting for precise outbound access control, with an independent TAP device and private IP (172.16.0.0/16) per sandbox.
- State Persistence: a snapshot mechanism preserves state across sessions.
- Coding Agent Integration: one-click launch of sandboxes pre-installed with Claude Code, Codex, or Pi.
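The network-control behavior can be illustrated with a minimal sketch of domain-whitelist matching and per-sandbox address allocation from 172.16.0.0/16. The function names, whitelist contents, and the +2 offset convention are illustrative assumptions, not SmolVM's actual implementation.

```python
import fnmatch
import ipaddress

# Example whitelist; real deployments would supply their own patterns.
ALLOWED_DOMAINS = ["pypi.org", "*.pypi.org", "github.com"]

def egress_allowed(hostname: str) -> bool:
    """Return True if the hostname matches any whitelisted pattern."""
    return any(fnmatch.fnmatch(hostname, pat) for pat in ALLOWED_DOMAINS)

# Hand each sandbox a private IP out of 172.16.0.0/16.
SUBNET = ipaddress.ip_network("172.16.0.0/16")

def sandbox_ip(index: int) -> str:
    """Map a sandbox index to a host address, skipping .0 (network) and .1 (gateway)."""
    return str(SUBNET[index + 2])

print(egress_allowed("files.pypi.org"))  # True (matches *.pypi.org)
print(egress_allowed("evil.example"))    # False
print(sandbox_ip(0))                     # 172.16.0.2
```

A /16 leaves room for tens of thousands of concurrent sandboxes, which lines up with the stated scalability goal.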
Architecturally, SmolVM uses a layered design. The underlying smolvm-core (Rust, PyO3 extension, edition 2024, rust-version 1.85) handles virtualization operations; images are built on Alpine Linux with zstd compression; the network layer routes outbound traffic through NAT with domain-level filtering; commands execute over SSH (paramiko); and state metadata is stored in SQLite (~/.local/state/smolvm/smolvm.db). It offers three interaction modes (Python SDK, CLI, and an optional FastAPI + WebSocket dashboard) and ships integration examples for OpenAI Agents, LangChain, and PydanticAI.
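The SQLite-backed metadata store can be sketched as follows. The table name and columns are assumptions for illustration (the actual schema of smolvm.db is not documented here), and an in-memory database stands in for the real file under ~/.local/state/smolvm/.

```python
import sqlite3
import time

# In-memory stand-in for ~/.local/state/smolvm/smolvm.db; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sandboxes (
        id         TEXT PRIMARY KEY,
        state      TEXT NOT NULL,   -- e.g. 'running', 'stopped'
        ip         TEXT,            -- private IP from 172.16.0.0/16
        created_at REAL             -- Unix timestamp
    )
""")

conn.execute(
    "INSERT INTO sandboxes (id, state, ip, created_at) VALUES (?, ?, ?, ?)",
    ("sbx-001", "running", "172.16.0.2", time.time()),
)
conn.commit()

row = conn.execute(
    "SELECT state, ip FROM sandboxes WHERE id = ?", ("sbx-001",)
).fetchone()
print(row)  # ('running', '172.16.0.2')
```

Keeping lifecycle state in a local SQLite file lets the CLI, SDK, and dashboard share one source of truth without running a separate service.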
Resource parameter ranges: vCPUs 1–32 (default 1), memory 128–16384 MiB (default 512 MiB), disk 64 MiB minimum (default 512 MiB). Requires Python >= 3.10 (supports 3.10–3.14). The project is open-sourced under Apache-2.0, currently at v0.0.13 (25 releases, 214 commits, 12 contributors).
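The documented parameter ranges translate directly into a validation check. The function below is an illustrative sketch, not SmolVM's API; only the bounds and defaults come from the text above.

```python
def validate_resources(vcpus: int = 1, memory_mib: int = 512, disk_mib: int = 512) -> dict:
    """Validate sandbox resource parameters against the documented ranges.

    Defaults mirror the documented ones: 1 vCPU, 512 MiB memory, 512 MiB disk.
    """
    if not 1 <= vcpus <= 32:
        raise ValueError("vCPUs must be between 1 and 32")
    if not 128 <= memory_mib <= 16384:
        raise ValueError("memory must be between 128 and 16384 MiB")
    if disk_mib < 64:
        raise ValueError("disk must be at least 64 MiB")
    return {"vcpus": vcpus, "memory_mib": memory_mib, "disk_mib": disk_mib}

print(validate_resources())  # {'vcpus': 1, 'memory_mib': 512, 'disk_mib': 512}
```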
Unconfirmed items: Windows host support not mentioned; cloud-hosted/SaaS service details unclear; OpenClaw's specific definition and standalone project status unknown; production-scale deployment capacity benchmarks missing.