
Neuro SAN Studio

Added May 8, 2026
Agent & Tooling
Open Source
Workflow Automation · Large Language Models · Multi-Agent System · Model Context Protocol · AI Agents · Agent Framework · Web Application · Agent & Tooling · Model & Inference Framework · Automation, Workflow & RPA · Protocol, API & Integration

An interactive design and deployment platform for multi-agent networks based on declarative HOCON configuration and the AAOSA decentralized protocol

Neuro SAN Studio is a multi-agent network orchestration platform developed by Cognizant AI Lab under the Apache-2.0 license. Built primarily in Python (3.12 or 3.13 required), the project is currently at version 0.2.28. It uses a layered architecture: the core library neuro-san can be installed independently via pip install neuro-san, while the Studio layer adds a full web UI, examples, and toolchains.

The platform's core design philosophy is data-driven: the entire multi-agent network is defined through declarative HOCON configuration files, lowering the barrier for non-technical participants. Orchestration is based on the AAOSA (Adaptive Agent-Oriented Software Architecture) protocol for decentralized self-organization, where agents route user queries via the Frontman pattern and autonomously delegate subtasks, supporting linear, hierarchical, and DAG topologies. For security, the Sly-Data mechanism ensures sensitive data passes through private channels without exposure to LLM chat flows, with optional OpenFGA integration for fine-grained authorization.

For tool integration, the CodedTools interface weaves LLM reasoning with Python deterministic tools, supports LangChain tool adapters, and connects to external agent ecosystems including Agentforce, Agentspace, CrewAI, MCP, and A2A. Each Neuro SAN server can also operate as an MCP Server. The platform is compatible with multiple LLM providers (OpenAI, Anthropic, Azure, Ollama, etc.) with per-agent model assignment flexibility, and supports local, container, and cloud deployments. Observability natively supports LangSmith, Arize Phoenix, and HoneyHive. The built-in Agent Network Designer meta-agent enables generating multi-agent configurations directly from natural language descriptions, covering vertical enterprise scenarios such as airline customer service, banking compliance, insurance claims, retail operations, and telecom troubleshooting.

Configuration & Design

  • HOCON Data-Driven Configuration: The entire multi-agent network is defined through declarative HOCON config files, enabling non-technical domain experts to participate in designing agent interaction logic.
  • Agent Network Designer (Meta-Agent): A built-in meta-agent that creates other agent networks—input natural language descriptions to generate customized multi-agent HOCON configurations.
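As a hedged illustration, a minimal agent-network registry might look like the sketch below. The key names (llm_config, tools, instructions, function) follow the style of the project's published examples but should be verified against the current documentation; all agent names here are invented.

```hocon
{
    # Default model for every agent in this network.
    "llm_config": {
        "model_name": "gpt-4o"
    },
    "tools": [
        {
            # The first entry acts as the frontman: the single entry
            # point that receives user queries and delegates downstream.
            "name": "support_frontman",
            "instructions": "Route customer questions to the right specialist.",
            "tools": ["billing_agent", "tech_agent"]
        },
        {
            "name": "billing_agent",
            "function": {
                "description": "Handles refunds and invoice questions."
            },
            "instructions": "Answer billing questions precisely."
        },
        {
            "name": "tech_agent",
            "function": {
                "description": "Handles error reports and outages."
            },
            "instructions": "Diagnose technical issues step by step."
        }
    ]
}
```

Because HOCON is a superset of JSON with comments and substitutions, a file like this stays readable to non-programmers while remaining machine-parseable.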

Orchestration & Communication

  • AAOSA Protocol: Agents autonomously decide how to delegate subtasks, achieving decentralized self-organizing behavior without a central controller.
  • Frontman Pattern: User queries are routed through a configurable frontman agent that delegates tasks to downstream specialist agents.
  • Multi-Topology Support: Supports linear, hierarchical, DAG, and other network topologies; agents can embed complete sub-networks.
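The routing idea behind AAOSA and the Frontman pattern can be sketched in plain Python. This is a toy illustration, not the neuro-san API: each agent decides locally whether to answer or to poll its downstream agents, and a single frontman is the entry point.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Agent:
    name: str
    skills: Set[str]                         # topics this agent handles itself
    downstream: List["Agent"] = field(default_factory=list)

    def handle(self, query: str) -> Optional[str]:
        # Local decision, no central controller: answer if the query
        # matches this agent's skills, otherwise poll downstream agents.
        if any(skill in query.lower() for skill in self.skills):
            return f"{self.name}: handled '{query}'"
        for sub in self.downstream:
            answer = sub.handle(query)
            if answer is not None:
                return answer
        return None

# Frontman pattern: one agent is the single entry point for user queries.
billing = Agent("billing", {"refund", "invoice"})
support = Agent("tech_support", {"error", "crash"})
frontman = Agent("frontman", set(), [billing, support])

print(frontman.handle("Please process my refund"))
# → billing: handled 'Please process my refund'
```

Because each node holds only references to its own downstream agents, the same mechanism composes into linear chains, hierarchies, or DAGs.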

Security & Compliance

  • Sly-Data Secure Channel: Sensitive data passes through private channels between agents without exposure to any LLM chat flow.
  • Secure by Default: Built-in security mechanisms with optional OpenFGA integration for per-user authorization.
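The Sly-Data idea reduces to keeping two channels: chat messages that get serialized into LLM input, and a private dictionary that only deterministic tools ever see. A minimal sketch of that separation (function and field names here are invented for illustration):

```python
def build_prompt(chat_messages):
    # Only the chat channel is serialized into LLM input.
    return "\n".join(f"{m['role']}: {m['content']}" for m in chat_messages)

def lookup_balance(sly_data):
    # Deterministic tools receive the private channel directly;
    # its contents never enter the prompt above.
    accounts = {"u-123": "$250.00"}
    return accounts[sly_data["user_id"]]

chat = [{"role": "user", "content": "What is my balance?"}]
sly_data = {"user_id": "u-123", "session_token": "secret-token"}

prompt = build_prompt(chat)
print("secret-token" in prompt)   # → False: credentials never reach the LLM
print(lookup_balance(sly_data))   # → $250.00
```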

Tooling & Integration

  • CodedTools Interface: Weaves LLM reasoning with deterministic Python tools; supports LangChain tool adapters, custom Python tools, APIs, and databases.
  • External Agent Ecosystem: Integration with Agentforce, Agentspace, CrewAI, MCP, A2A, LangChain, and other external frameworks.
  • MCP Protocol Support: Each Neuro SAN server can operate as an MCP Server.
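The shape of a coded tool can be sketched as follows. The interface here (an invoke method taking structured arguments plus the private sly_data channel) is an assumption for illustration; the actual neuro-san base class may differ.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

class CodedTool(ABC):
    """Sketch of a coded-tool interface: the agent hands structured
    arguments (and the private sly_data channel) to deterministic Python.
    This shape is assumed; check the neuro-san docs for the real class."""

    @abstractmethod
    def invoke(self, args: Dict[str, Any], sly_data: Dict[str, Any]) -> Any:
        raise NotImplementedError

class CurrencyConverter(CodedTool):
    RATES = {"EUR": 0.92, "GBP": 0.79}

    def invoke(self, args: Dict[str, Any], sly_data: Dict[str, Any]) -> Any:
        # Deterministic computation the LLM can call but not alter.
        return round(float(args["amount"]) * self.RATES[args["to"]], 2)

tool = CurrencyConverter()
print(tool.invoke({"amount": "100", "to": "EUR"}, {}))  # → 92.0
```

The division of labor is the point: the LLM decides when to call the tool and with what arguments; the tool guarantees the arithmetic.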

Observability

  • Robust Traceability: Detailed logging, tracing, session-level metrics with native support for LangSmith, Arize Phoenix, HoneyHive, and other observability platforms.
  • Thinking Process Visualization: Agent thinking processes are written to the logs/thinking_dir/ directory.

Deployment & Compatibility

  • Cloud-Agnostic Architecture: Supports local, container, and cloud deployments with complete Docker deployment scripts.
  • Multi-LLM Provider Compatibility: Compatible with OpenAI, Anthropic, Azure, Ollama, etc., with flexible per-agent model assignment.
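Per-agent model assignment can be pictured as a network-level default with individual overrides. A hedged sketch in the same HOCON style (key and model names assumed, to be checked against the project's examples):

```hocon
{
    # Network-wide default model.
    "llm_config": { "model_name": "gpt-4o" },
    "tools": [
        { "name": "router", "tools": ["deep_analyst"] },
        {
            "name": "deep_analyst",
            # This agent alone overrides the default provider/model.
            "llm_config": { "model_name": "claude-3-5-sonnet-20241022" },
            "instructions": "Perform in-depth analysis."
        }
    ]
}
```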

Quick Start

git clone https://github.com/cognizant-ai-lab/neuro-san-studio
cd neuro-san-studio
python -m venv venv
source venv/bin/activate && export PYTHONPATH=$(pwd)
pip install -r requirements.txt
export OPENAI_API_KEY="your_key_here"
python -m run
# Access UI: http://localhost:4173/

Unconfirmed Information

  • The README mentions gpt-5.2 as the default model; whether this model name actually exists or is a typo remains to be verified.
  • Specific differences between Cognizant Neuro® AI Multi-Agent Accelerator and the open-source version are not detailed on public pages.
