A bio-inspired memory layer for AI agents modeled on the hippocampus, featuring decay-by-default, retrieval strengthening, and sleep consolidation, keeping memory consistent across sessions and tools.
Hippo Memory is a bio-inspired memory layer for AI agents built on the Complementary Learning Systems (CLS) theory from hippocampal neuroscience. It implements a three-tier memory architecture (Buffer → Episodic → Semantic) with six core mechanisms: default exponential decay, retrieval strengthening, emotion-tag-weighted decay, sleep consolidation (decay + merge + prune + deduplicate), schema-accelerated integration, and conflict detection.
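The decay and strengthening mechanics can be sketched as follows. This is a minimal illustration under assumed parameters (a 7-day half-life, a 0.2 retrieval boost); the type names, constants, and functions here are hypothetical, not Hippo Memory's actual API:

```typescript
// Hypothetical sketch of decay-by-default, emotion-weighted decay,
// and retrieval strengthening; names and constants are assumptions.
type Memory = {
  strength: number;      // current activation in [0, 1]
  lastAccess: number;    // epoch ms of last retrieval
  emotionWeight: number; // 1.0 = neutral; > 1 slows decay
};

const HALF_LIFE_MS = 7 * 24 * 60 * 60 * 1000; // assumed 7-day half-life

// Default exponential decay, slowed by the memory's emotion tag.
function decayed(m: Memory, now: number): number {
  const halfLife = HALF_LIFE_MS * m.emotionWeight;
  return m.strength * Math.pow(0.5, (now - m.lastAccess) / halfLife);
}

// Retrieval strengthening: each access pushes strength back toward 1.
function strengthen(m: Memory, now: number, boost = 0.2): Memory {
  const s = decayed(m, now);
  return { ...m, strength: Math.min(1, s + boost * (1 - s)), lastAccess: now };
}
```

Under this model, an unretrieved neutral memory halves in strength every half-life, while emotion-tagged memories (emotionWeight > 1) fade more slowly and frequently retrieved memories are continually refreshed.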
The Persistent Goal Stack (dlPFC, v0.38+) supports full goal lifecycle management with goal-conditioned retrieval augmentation, where completion results automatically propagate to associated memory weights. The continuity system — Active Task Snapshots, Session Event Trails, and Session Handoffs — enables seamless long-task recovery across sessions, with hippo recall --continuity returning complete context in a single call.
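The goal lifecycle described above — a stack of goals whose completion results propagate to associated memory weights — might look roughly like this. All names and the boost callback are illustrative assumptions, not the library's real interface:

```typescript
// Hypothetical goal stack: completing a goal feeds a weight boost
// back into its associated memories; all names are assumptions.
type Goal = { id: string; title: string; memoryIds: string[]; done: boolean };

class GoalStack {
  private goals: Goal[] = [];
  // `boost` stands in for whatever mechanism updates memory weights.
  constructor(private boost: (memoryId: string, delta: number) => void) {}

  push(goal: Goal): void {
    this.goals.push(goal);
  }

  // The topmost open goal conditions retrieval augmentation.
  active(): Goal | undefined {
    return [...this.goals].reverse().find((g) => !g.done);
  }

  // Completion propagates a weight boost to the goal's memories.
  complete(id: string, delta = 0.1): void {
    const goal = this.goals.find((g) => g.id === id);
    if (!goal || goal.done) return;
    goal.done = true;
    for (const mid of goal.memoryIds) this.boost(mid, delta);
  }
}
```

The point of the sketch is the feedback loop: retrieval is conditioned on the active goal, and finishing a goal strengthens exactly the memories that contributed to it.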
Built on local SQLite storage with zero runtime dependencies and optional local embeddings via @xenova/transformers, it exposes three interfaces: CLI, MCP Server, and HTTP API. It auto-detects agent frameworks (Claude Code, Cursor, Codex, etc.) and injects hooks into them. Enterprise-grade security includes multi-tenant isolation (scrypt-hashed API keys), GDPR Right to Be Forgotten via a single API call, audit logs, and explicit scope filtering. It supports real-time Slack webhook ingestion and backfill, plus bulk import from ChatGPT, CLAUDE.md, .cursorrules, Markdown, and Git history.
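Scrypt-hashed API keys for tenant isolation can be implemented with Node's built-in crypto module. This is a sketch of the general technique (salted scrypt plus constant-time comparison), not Hippo Memory's actual code; the function names and parameters are assumptions:

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Hypothetical API-key hashing for multi-tenant isolation.
// Store only `salt:hash`, never the raw key.
function hashApiKey(key: string): string {
  const salt = randomBytes(16);
  const hash = scryptSync(key, salt, 32);
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

// Constant-time verification against the stored salt:hash pair.
function verifyApiKey(key: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const candidate = scryptSync(key, Buffer.from(saltHex, "hex"), 32);
  return timingSafeEqual(candidate, Buffer.from(hashHex, "hex"));
}
```

Scrypt is memory-hard, which makes brute-forcing leaked hashes expensive, and `timingSafeEqual` avoids leaking match prefixes through response timing.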
Benchmark results: Sequential Learning Benchmark trap rate reduced from 78% to 14%; LongMemEval R@5 at 74.0% (BM25-only); Slack incident scenario 10/10 vs. transcript replay. 926 tests, all run against real SQLite databases with zero mocks.