DISCOVER THE FUTURE OF AI AGENTS

All Projects

27 projects

vllm-mlx

A vLLM-style inference server for Apple Silicon with a native MLX backend. It exposes OpenAI- and Anthropic-compatible APIs from a single process and features unified multimodal serving, continuous batching, a paged KV cache, and SSD-tiered caching.

Multimodal · Large Language Models · Python
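Since the server advertises an OpenAI-compatible API, requests follow the standard chat-completions wire format. A minimal sketch of building such a request with the standard library; the endpoint URL and model name here are placeholders, and vllm-mlx's actual host, port, and model identifiers may differ:

```python
import json
import urllib.request

# Hypothetical local endpoint; the real host/port depends on server config.
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request.

    The payload shape is the standard OpenAI format; the endpoint URL
    is an assumption for illustration.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("my-mlx-model", "Hello!")
# To actually send it (requires a running server):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the Anthropic-compatible API lives in the same process, the same server could be targeted by either client style without running two services.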

UncommonRoute

A local proxy that automatically routes each LLM request to the cheapest model that is still capable of handling it.

Model & Inference Framework · AI Agents · Large Language Models
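The core idea of "cheapest still-capable" routing can be sketched in a few lines. The model names, prices, and capability scores below are made up for illustration; UncommonRoute's actual routing policy and model catalog will differ:

```python
# Toy model catalog: price per million tokens and a capability score.
MODELS = [
    {"name": "small-fast", "price_per_mtok": 0.15, "capability": 3},
    {"name": "mid-tier",   "price_per_mtok": 1.00, "capability": 6},
    {"name": "frontier",   "price_per_mtok": 10.0, "capability": 9},
]

def route(required_capability: int) -> str:
    """Return the cheapest model whose capability meets the requirement."""
    candidates = [m for m in MODELS if m["capability"] >= required_capability]
    if not candidates:
        raise ValueError("no model is capable enough for this request")
    return min(candidates, key=lambda m: m["price_per_mtok"])["name"]

print(route(2))  # easy request: the cheapest model suffices
print(route(7))  # hard request: only the frontier model qualifies
```

The interesting part of a real router is estimating `required_capability` per request; once that estimate exists, the selection step is just a filtered minimum as above.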

Hyperspace AGI

An experimental, fully peer-to-peer distributed AGI system in which intelligence compounds continuously through autonomous agent networks. It supports decentralized training across heterogeneous devices, P2P inference routing, and a built-in blockchain micropayment economy.

Model & Inference Framework · Multi-Agent System · AI Agents

Rapid-MLX

A local AI inference engine for Apple Silicon with an OpenAI-compatible API, supporting multimodal inputs, tool calling, and smart cloud routing.

AI Agents · Large Language Models · Model Context Protocol

OpenJarvis

A local-first personal AI agent framework from Stanford that enables offline agent orchestration, skill import, and trace-driven continuous learning through five composable primitives. It supports 10+ inference backends and four interaction modes.

Other · Large Language Models · Model Context Protocol

vLLM-Omni

A fully disaggregated multimodal-model inference and serving framework that extends vLLM to support unified any-to-any modality inference and high-performance deployment.

Deep Learning · Multimodal · FastAPI

Harbor

A Docker Compose-based CLI orchestrator for local LLM stacks: spin up pre-wired inference backends, frontend UIs, RAG, voice, image generation, and more with a single command.

Model & Inference Framework · Multimodal · Large Language Models

Mooncake

A KVCache-centric disaggregated serving platform for LLMs, providing distributed KVCache pooling, a topology-aware high-speed transfer engine, and a centralized scheduler, with support for prefill-decode separation and elastic MoE inference.

Large Language Models · Rust · PyTorch

llama.cpp

LLM inference in C/C++, achieving state-of-the-art performance locally or in the cloud with minimal setup, via the GGUF format and multi-hardware backend support.

Large Language Models · Python · CLI

mlx-openai-server

A high-performance OpenAI-compatible API server for MLX models on Apple Silicon, supporting text, vision, audio transcription, and image generation/editing.

Deep Learning · Large Language Models · Multimodal
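Vision support in OpenAI-compatible servers like this one rides on OpenAI's standard content-parts message format, where a single user message mixes text and image parts. A sketch of such a payload; the content-parts structure is OpenAI's documented format, but the model name and image bytes are placeholders:

```python
import base64
import json

# Stand-in for real PNG bytes; a real call would read an image file.
fake_image_bytes = b"\x89PNG..."
data_url = "data:image/png;base64," + base64.b64encode(fake_image_bytes).decode()

payload = {
    "model": "mlx-vision-model",  # placeholder model name
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }],
}
print(json.dumps(payload)[:60])
```

Because images are inlined as base64 data URLs, no separate upload endpoint is needed; the same chat-completions route handles text-only and vision requests.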
