📄️ Quick Start
Quick start: CLI, config, Docker
📄️ 🐳 Docker, Deploying LiteLLM Proxy
You can find the Dockerfile to build the LiteLLM proxy here.
📄️ ⚡ Best Practices for Production
1. Use this config.yaml
🔗 📖 All Endpoints (Swagger)
📄️ ✨ Enterprise Features - SSO, Audit Logs, Guardrails
Get in touch with us here
📄️ 🎉 Demo App
Here is a demo of the proxy. To log in, pass in:
📄️ Proxy Config.yaml
Set the model list, api_base, api_key, temperature & proxy server settings (master_key) in the config.yaml.
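A minimal sketch of such a config.yaml (the model alias, API key reference, and master key value here are placeholders; see the Proxy Config page for the full schema):

```yaml
model_list:
  - model_name: gpt-3.5-turbo          # alias clients will request
    litellm_params:
      model: openai/gpt-3.5-turbo      # underlying provider/model
      api_key: os.environ/OPENAI_API_KEY

general_settings:
  master_key: sk-1234                  # proxy admin key (example value)
```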
📄️ 🔥 Load Balancing, Fallbacks, Retries, Timeouts
- Quick Start load balancing
📄️ 💸 Spend Tracking
Track spend for keys, users, and teams across 100+ LLMs.
📄️ 🤗 UI - Self-Serve
Allow users to create their own keys on the Proxy UI.
📄️ 💰 Budgets, Rate Limits
Requirements:
📄️ 💰 Setting Team Budgets
Track spend, set budgets for your Internal Team
📄️ 🙋‍♂️ Customers
Track spend, set budgets for your customers.
📄️ 💵 Billing
Bill internal teams and external customers for their usage.
📄️ Use with Langchain, OpenAI SDK, LlamaIndex, Curl
Input, Output, Exceptions are mapped to the OpenAI format for all supported models
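Since the proxy speaks the OpenAI format, any request body follows that shape regardless of the backing model. A minimal sketch of such a body, built with the standard library (the model alias `gpt-3.5-turbo` is an assumption; use whatever `model_name` your config.yaml defines):

```python
import json

# OpenAI-format chat completion request body, as the proxy expects it.
payload = {
    "model": "gpt-3.5-turbo",  # assumed alias from your config.yaml
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
}

# Serialize for an HTTP POST to the proxy's /chat/completions endpoint.
body = json.dumps(payload)
```

The same body works whether you send it with curl, the OpenAI SDK (pointed at the proxy's base URL), Langchain, or LlamaIndex.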
📄️ 🔑 Virtual Keys
Track spend and control model access via virtual keys for the proxy.
📄️ 🚨 Alerting / Webhooks
Get alerts for:
🗃️ 🪢 Logging
2 items
📄️ [BETA] UI - Admin
Create keys, track spend, add models without worrying about the config / CRUD endpoints.
📄️ 📈 Prometheus metrics [BETA]
LiteLLM exposes a /metrics endpoint for Prometheus to poll.
📄️ ✨ 📧 Email Notifications
Send an email to your users when:
📄️ ✨ Attribute Management changes to Users
Call management endpoints on behalf of a user. (Useful when connecting proxy to your development platform).
📄️ 👥 Team-based Routing + Logging
Routing
📄️ Region-based Routing
Route specific customers to EU-only models.
📄️ [BETA] JWT-based Auth
Use JWTs to authenticate admins / projects to the proxy.
🗃️ Extra Load Balancing
1 item
📄️ Model Management
Add new models + get model info without restarting the proxy.
📄️ Health Checks
Use this to health-check all LLMs defined in your config.yaml.
📄️ Debugging
2 levels of debugging supported.
📄️ PII Masking
LiteLLM supports Microsoft Presidio for PII masking.
📄️ 🕵️ Prompt Injection Detection
LiteLLM supports the following methods for detecting prompt injection attacks:
📄️ Caching
Cache LLM Responses
📄️ Modify / Reject Incoming Requests
- Modify data before making LLM API calls on the proxy
📄️ Post-Call Rules
Use this to fail a request based on the output of an LLM API call.
📄️ CLI Arguments
CLI arguments: --host, --port, --num_workers