A full-stack LLMOps platform for LLM and Agent applications, integrating observability, evaluation, Agent simulation, and prompt management with sub-millisecond AI Gateway governance.
LangWatch is a full-stack LLMOps platform developed by Reasoning Engine B.V. to address observability, evaluation, and governance challenges for LLM and Agent applications in both development and production. The platform uses a polyglot architecture: the frontend and core services are built in TypeScript; the AI Gateway is written in Go to achieve sub-millisecond latency (~11 µs, embedding the bifrost/core scheduling library); and heavy NLP evaluation tasks are handled by a Python backend. PostgreSQL, Redis, ClickHouse, and OpenSearch form the data layer.
Core capabilities span four major areas:

- Observability and evaluation: full-chain Trace tracking and automated evaluation loops over the OpenTelemetry/OTLP protocol (Trace → Dataset → Evaluate → Optimize → Re-test); a minimal tracing sketch follows this list
- Agent simulation: full-stack, end-to-end simulations that mimic tools, state, and user behavior to pinpoint failing nodes precisely
- AI Gateway: a high-performance gateway providing OpenAI/Anthropic-compatible proxy endpoints, a virtual key system, tiered budget controls (Organization → Team → Project → VK → Subject, with soft warnings and hard blocks), inline Guardrails (PII redaction, prompt injection detection, etc.), Provider Fallback chains, and Model Aliases for one-click provider switching
- Prompt management: Git-integrated prompt versioning and an optimization studio (with DSPy support)

Additionally, the platform offers team collaboration mechanisms (run reviews, failure annotation, an Annotation Queue), complies with GDPR and ISO 27001, and ships a LangWatch MCP Server for integration with MCP clients such as Claude Desktop.
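To make the tracing loop concrete, here is a minimal Python sketch. It assumes the LangWatch Python SDK exposes `langwatch.setup()`, a `@langwatch.trace()` decorator, and OpenAI autotracking; treat these names as assumptions and verify them against the SDK documentation.

```python
# Minimal tracing sketch -- assumes the LangWatch Python SDK provides
# langwatch.setup(), a @langwatch.trace() decorator, and OpenAI call
# autotracking (names are assumptions; check the SDK docs).
import os

import langwatch
from openai import OpenAI

langwatch.setup(api_key=os.environ["LANGWATCH_API_KEY"])

client = OpenAI()

@langwatch.trace(name="support-answer")
def answer(question: str) -> str:
    # Each call becomes a Trace; the LLM call is captured as a span
    # once autotracking is enabled on the current trace.
    langwatch.get_current_trace().autotrack_openai_calls(client)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("How do I rotate a virtual key?"))
```

Traces collected this way feed directly into the Dataset → Evaluate → Optimize loop described above.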
Regarding integration, LangWatch is strictly framework-agnostic. Beyond its Python/TypeScript/Go SDKs, it natively ingests the OTLP protocol and ships official integrations for 20+ mainstream frameworks, including LangChain, LlamaIndex, CrewAI, Vercel AI SDK, Google ADK, Semantic Kernel, and Spring AI, as well as no-code platforms like n8n, LangFlow, and Flowise. Deployment is flexible: one-command local startup (npx @langwatch/server), Docker Compose, Kubernetes Helm Charts, cloud-native on-prem, and hybrid deployment with data localization.
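Because ingestion speaks plain OTLP, any OpenTelemetry SDK can export traces to LangWatch without a LangWatch SDK in the loop. The sketch below uses the standard OpenTelemetry Python packages; the endpoint path and Authorization header format are assumptions to confirm against the LangWatch docs.

```python
# Framework-agnostic OTLP export using the standard OpenTelemetry Python API.
# The endpoint path and auth header below are assumptions; confirm the exact
# values in the LangWatch docs for your deployment.
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

exporter = OTLPSpanExporter(
    endpoint="https://app.langwatch.ai/api/otel/v1/traces",  # assumed path
    headers={"Authorization": f"Bearer {os.environ['LANGWATCH_API_KEY']}"},
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-llm-app")

with tracer.start_as_current_span("llm.generation") as span:
    # Attach whatever attributes your pipeline needs; gen_ai.* semantic
    # convention keys are a reasonable choice for LLM spans.
    span.set_attribute("gen_ai.request.model", "gpt-4o-mini")
```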
Deployment Options
- Cloud SaaS: Create a free account at https://app.langwatch.ai, obtain an API Key, and integrate the SDK
- Local One-Click: `npx @langwatch/server` (auto-installs dependencies and starts all services at http://localhost:5560); see the sketch after this list
- Docker Compose: Clone the repo and run `docker compose up -d --wait --build`
- Kubernetes: Deploy using the Helm Charts in the `charts/` directory, including a standalone AI Gateway sub-chart
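A self-hosted instance exposes the same APIs as the cloud SaaS, so SDK setup only needs an endpoint override. A minimal sketch, assuming the Python SDK honors a `LANGWATCH_ENDPOINT` environment variable (the variable name is an assumption; verify in the SDK docs):

```python
# Pointing the SDK at a self-hosted instance started via `npx @langwatch/server`.
# LANGWATCH_ENDPOINT is an assumed override variable -- check the SDK docs.
import os

os.environ.setdefault("LANGWATCH_ENDPOINT", "http://localhost:5560")
os.environ.setdefault("LANGWATCH_API_KEY", "<api-key-from-local-ui>")

import langwatch  # imported after the env vars so setup() can pick them up

langwatch.setup()  # reads endpoint and key from the environment
```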
AI Gateway Integration Example
```bash
# OpenAI compatible
export OPENAI_BASE_URL=https://gateway.langwatch.ai/v1
export OPENAI_API_KEY=lw_vk_live_...

# Anthropic compatible
export ANTHROPIC_BASE_URL=https://gateway.langwatch.ai/v1
export ANTHROPIC_AUTH_TOKEN=lw_vk_live_...
```
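Since the gateway speaks the OpenAI wire protocol, the stock OpenAI SDK works unchanged once `base_url` points at it. The model alias below is illustrative; aliases resolve to whatever providers you configure in the gateway.

```python
# Calling models through the AI Gateway with the unmodified OpenAI SDK.
# The virtual key and the "gpt-4o" alias are placeholders -- use the key
# and Model Aliases configured in your own gateway.
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.langwatch.ai/v1",
    api_key="lw_vk_live_...",  # LangWatch virtual key, not a provider key
)

response = client.chat.completions.create(
    model="gpt-4o",  # resolved by the gateway; fallback chains apply here
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```

Requests routed this way pick up budget controls, Guardrails, and Provider Fallback without any application-side changes.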
Note: The source code is released under the Business Source License 1.1 (Change Date 2099-12-31, after which it converts to Apache-2.0). The source is visible, but commercial production use requires an appropriate commercial license. The GitHub repository page shows conflicting license labels ("Unknown" and "Apache-2.0"); the LICENSE.md file in the repository is authoritative.