LLM Gateway Documentation
Everything you need to deploy, configure, secure, and scale your production LLM infrastructure with RealTimeDetect.
Get Started
Install and run RealTimeDetect LLM Gateway and route traffic to your first LLM provider in minutes.
Quickstart →

Concepts
Learn about gateway architecture, request routing, provider adapters, and core terminology.
Read Concepts →

Gateway Setup
Configure listeners, define routes, and wire up your provider backends with flexible YAML config.
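As a rough orientation, a gateway config of this shape typically wires those three pieces together. The sketch below is illustrative only: the key names (`listeners`, `providers`, `routes`) and their layout are assumptions, not the gateway's actual schema — see the configuration reference for the real field names.

```yaml
# Hypothetical config layout -- key names are illustrative, not the actual schema.
listeners:
  - address: 0.0.0.0:8080   # where the gateway accepts client traffic
    protocol: http

providers:
  - name: openai-primary    # a backend the routes below can reference
    type: openai
    api_key: ${OPENAI_API_KEY}   # read from the environment, never hard-coded

routes:
  - match:
      path: /v1/chat/completions
    backend: openai-primary  # route matching requests to the provider above
```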
Configure →

LLM Providers
Connect to OpenAI, Anthropic, Azure OpenAI, Google Gemini, Meta Llama, and Mistral AI.
View Providers →

Traffic Management
Smart routing, weighted load balancing, cost-based failover, retries, and circuit breakers.
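To give a feel for how those features combine, here is a hypothetical traffic policy: weights split traffic across two backends, with retry and circuit-breaker settings guarding each. Every key name here is an assumption for illustration; consult the traffic management guide for the supported schema.

```yaml
# Hypothetical traffic policy -- key names are illustrative, not the actual schema.
routes:
  - match:
      path: /v1/chat/completions
    backends:
      - provider: openai-primary
        weight: 80            # ~80% of traffic
      - provider: anthropic-fallback
        weight: 20            # ~20%, also the failover target
    retry:
      max_attempts: 3         # retry transient upstream errors
    circuit_breaker:
      failure_threshold: 5    # trip after 5 consecutive failures
```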
Manage Traffic →

Security
Secure your gateway with API keys, JWT verification, rate limiting, and OAuth 2.0 / OIDC.
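Those controls usually live side by side in the gateway config. The fragment below is a sketch under assumed key names (`auth`, `rate_limit`, and their children are illustrative, not the real schema):

```yaml
# Hypothetical security block -- key names are illustrative, not the actual schema.
auth:
  api_keys:
    header: Authorization     # clients present keys in this header
  jwt:
    issuer: https://auth.example.com   # tokens verified against this issuer
rate_limit:
  requests_per_minute: 600    # per-client ceiling before requests are rejected
```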
Secure Gateway →

Observability
Monitor with Prometheus metrics, structured JSON logging, and OpenTelemetry distributed tracing.
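A typical setup enables all three signals in one block. As with the other sketches, the key names below (`observability`, `otlp_endpoint`, the endpoint URL) are assumptions for illustration only:

```yaml
# Hypothetical observability block -- key names are illustrative, not the actual schema.
observability:
  metrics:
    prometheus:
      path: /metrics          # scrape endpoint for Prometheus
  logging:
    format: json              # structured JSON logs
  tracing:
    otlp_endpoint: http://otel-collector:4317   # OpenTelemetry collector (example address)
```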
Add Observability →

Reference
Full API reference, YAML config schema, environment variables, and release changelog.
View Reference →