DISCOVER THE FUTURE OF AI AGENTS

All Projects

43 projects

PromptHub

A free, open-source, all-in-one workspace for AI Prompt and Skill management, featuring versioned Prompt editing, multi-platform Skill distribution, multi-model parallel testing, and local-first data synchronization.

Model & Inference Framework · Large Language Models · Electron

NeMo-Skills

A full-stack LLM development toolkit from NVIDIA covering synthetic data generation, multi-backend inference, model training, and 11-category benchmark evaluation, scaling from single GPU to tens-of-thousands-GPU Slurm clusters.

Deep Learning · AI Agents · Model Context Protocol

NeMo Gym

A library for building RL training environments for LLMs, providing complete infrastructure from development and testing to scaled rollout collection, with built-in RLVR scenarios and tool-calling support.

Other · Deep Learning · AI Agents

MLflow

An open-source AI engineering platform for agents, LLMs, and models, providing experiment tracking, a model registry, LLM observability, evaluation, prompt optimization, and a unified AI gateway.

Model & Inference Framework · SDK · AI Agents

RuVector

A self-learning vector database integrating GNN-driven search optimization, local LLM inference, Cypher graph queries, and a PostgreSQL vector extension, deployable from WASM embeddings to Raft-distributed clusters.

Docs, Tutorials & Resources · Machine Learning · RAG
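The conceptual core of any vector database is nearest-neighbor search over embeddings. A minimal brute-force sketch of that core (illustrative only — RuVector's GNN-driven optimization and indexes replace this linear scan, and `cosine_top_k` is an assumed name, not its API):

```python
import numpy as np

def cosine_top_k(query, vectors, k=3):
    """Return the k most similar rows of `vectors` to `query` by cosine
    similarity, as (row index, score) pairs. A brute-force baseline —
    real vector databases accelerate this with approximate indexes."""
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q                      # cosine similarity per row
    top = np.argsort(-scores)[:k]      # indices of the k highest scores
    return list(zip(top.tolist(), scores[top].tolist()))
```

The same ranking is what a Cypher-style graph query or a PostgreSQL vector extension ultimately computes; only the storage and index layers differ.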

llmfit

A Rust-based cross-platform CLI tool that right-sizes LLMs to your system's RAM, CPU, and GPU by detecting hardware specs and recommending optimal models and quantization strategies. Covers 206 models from 57 providers.

Model & Inference Framework · Large Language Models · Rust
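The sizing arithmetic behind a tool like this is straightforward: weight memory ≈ parameter count × bytes per weight, plus runtime overhead. A rough sketch under assumed numbers (the overhead factor and function names are illustrative, not llmfit's actual model):

```python
def estimated_model_gb(params_billion: float, bits_per_weight: int,
                       overhead: float = 1.2) -> float:
    """Rough memory footprint of an LLM's weights at a given quantization.

    params_billion:  parameter count in billions (e.g. 7 for a 7B model)
    bits_per_weight: 16 for fp16, 8 for int8, 4 for 4-bit quantization
    overhead:        multiplier for KV cache and runtime buffers
                     (an assumption for illustration, not llmfit's value)
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 2**30

def fits(params_billion: float, bits_per_weight: int,
         available_gb: float) -> bool:
    """Does the model plausibly fit in the given memory budget?"""
    return estimated_model_gb(params_billion, bits_per_weight) <= available_gb
```

By this estimate a 7B model at 4-bit quantization (~4 GB) fits in 8 GB of RAM, while the same model in fp16 (~16 GB) does not — the kind of trade-off such a tool surfaces automatically.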

free-llm-api-resources

A curated list of free LLM inference APIs, covering rate limits, model lists, and special requirements for major platforms like OpenRouter, Google AI Studio, Groq, and Cerebras. Ideal for developers in the prototyping phase.

Large Language Models · Python · Knowledge Base

nanochat

A minimal, hackable experimental harness for training LLMs on a single GPU node, covering all stages from pretraining to a ChatGPT-like UI.

Large Language Models · Transformers · PyTorch

Vision-Agents

An open-source framework by Stream for building vision AI agents that work with any model or video provider, leveraging Stream's edge network for ultra-low latency video experiences.

Agent & Tooling · Python · PyTorch

AirLLM

AirLLM optimizes inference memory usage, enabling 70B large language models to run on a single 4 GB GPU without quantization, distillation, or pruning. It also supports running the 405B Llama 3.1 model on 8 GB of VRAM.

Model & Inference Framework · Python · PyTorch
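The trick that makes this possible is layer-by-layer execution: only one transformer layer's weights need to reside in GPU memory at a time, loaded from disk on demand and released afterward. A toy sketch of the idea (pure NumPy, not AirLLM's API; all names are illustrative):

```python
import numpy as np

def layered_forward(x, layer_refs, load_layer):
    """Run a deep network while holding only one layer's weights in
    memory at a time — the idea behind fitting a 70B model on a 4 GB GPU.

    layer_refs: opaque handles (e.g. checkpoint shard paths on disk)
    load_layer: callable that materializes one layer's weights on demand
    """
    for ref in layer_refs:
        w = load_layer(ref)   # disk -> (GPU) memory, this layer only
        x = np.tanh(x @ w)    # apply the layer
        del w                 # release before loading the next layer
    return x

# Toy demo: 4 "layers" generated on demand instead of read from disk.
rng = np.random.default_rng(0)
dim = 8
out = layered_forward(
    np.ones(dim),
    layer_refs=range(4),
    load_layer=lambda _: rng.standard_normal((dim, dim)) / np.sqrt(dim),
)
```

Peak memory is one layer's weights plus activations, independent of total depth — which is why model size stops being the binding constraint.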

Page 1 / 5 · 43 total
