LiteLLM Integration
Integrate Superagent with LiteLLM by configuring custom endpoints and proxy settings
Superagent provides seamless integration with LiteLLM. Configure LiteLLM to route requests through your Superagent proxy to add AI firewall protection to your multi-provider LLM applications.
Python SDK with Custom Endpoint
```python
from litellm import completion
import os

# Set your API keys
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# Route through Superagent proxy
response = completion(
    model="openai/gpt-4o",
    messages=[{"content": "Hello world", "role": "user"}],
    api_base="YOUR_SUPERAGENT_PROXY_URL"  # Replace with your Superagent proxy
)

print(response.choices[0].message.content)
```
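The same pattern works for other providers LiteLLM supports. For example, a sketch routing an Anthropic model through the same Superagent proxy (this assumes your proxy forwards Anthropic traffic, as in the proxy configuration below):

```python
from litellm import completion
import os

os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-api-key"

# Same call shape: only the model string and API key change
response = completion(
    model="anthropic/claude-3-opus-20240229",
    messages=[{"content": "Hello world", "role": "user"}],
    api_base="YOUR_SUPERAGENT_PROXY_URL"  # Replace with your Superagent proxy
)

print(response.choices[0].message.content)
```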
LiteLLM Proxy Configuration
```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_base: YOUR_SUPERAGENT_PROXY_URL # Replace with your Superagent proxy
      api_key: "os.environ/OPENAI_API_KEY"
  - model_name: claude-3-opus
    litellm_params:
      model: anthropic/claude-3-opus-20240229
      api_base: YOUR_SUPERAGENT_PROXY_URL # Replace with your Superagent proxy
      api_key: "os.environ/ANTHROPIC_API_KEY"
  - model_name: grok-3
    litellm_params:
      model: xai/grok-3
      api_base: YOUR_SUPERAGENT_PROXY_URL # Replace with your Superagent proxy
      api_key: "os.environ/XAI_API_KEY"
```
Running LiteLLM Proxy
```bash
# Install LiteLLM
pip install 'litellm[proxy]'

# Start the proxy with your config
litellm --config config.yaml

# The proxy will run on http://0.0.0.0:4000
```
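Before pointing applications at it, you can confirm the proxy is serving the models from your config by listing them through its OpenAI-compatible endpoint. A quick sketch, assuming the proxy is running locally on port 4000 with no master key set:

```python
import openai

# Point the OpenAI client at the local LiteLLM proxy
client = openai.OpenAI(api_key="anything", base_url="http://localhost:4000")

# Should print the model_name entries from config.yaml (gpt-4o, claude-3-opus, grok-3)
for model in client.models.list():
    print(model.id)
```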
Using the Protected Proxy
```python
import openai

# Connect to your LiteLLM proxy (now protected by Superagent)
client = openai.OpenAI(
    api_key="anything",  # LiteLLM proxy doesn't require authentication by default
    base_url="http://localhost:4000"  # Your LiteLLM proxy URL
)

# Use any configured model
response = client.chat.completions.create(
    model="gpt-4o",  # Model name from your config
    messages=[{"role": "user", "content": "Hello world"}]
)
```
TypeScript/JavaScript Usage
```typescript
import OpenAI from 'openai';

// Connect to your protected LiteLLM proxy
const client = new OpenAI({
  apiKey: 'anything', // LiteLLM proxy doesn't require authentication by default
  baseURL: 'http://localhost:4000', // Your LiteLLM proxy URL
});

// Use any configured model
const response = await client.chat.completions.create({
  model: 'claude-3-opus', // Model name from your config
  messages: [{ role: 'user', content: 'Hello world' }],
});
```
Load Balancing with Multiple Providers
```yaml
model_list:
  # Multiple OpenAI deployments through Superagent
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_base: YOUR_SUPERAGENT_PROXY_URL_1
      api_key: "os.environ/OPENAI_API_KEY"
  - model_name: gpt-4o # Same model name for load balancing
    litellm_params:
      model: openai/gpt-4o
      api_base: YOUR_SUPERAGENT_PROXY_URL_2
      api_key: "os.environ/OPENAI_API_KEY"

router_settings:
  routing_strategy: "simple-shuffle" # Load balance between deployments
  fallbacks: [{"claude-3-opus": ["gpt-4o"]}] # Fallback routing
```
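From the client side nothing changes with load balancing in place: you still request `gpt-4o`, and the router picks one of the Superagent-protected deployments per request. A sketch assuming the proxy built from the config above is running on localhost:4000:

```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://localhost:4000")

# The router shuffles requests across both Superagent-protected deployments;
# if a deployment fails, the configured fallbacks are tried automatically.
for i in range(3):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Request {i}"}]
    )
    print(response.choices[0].message.content)
```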
Streaming Support
```python
from litellm import completion

# Streaming through Superagent proxy
response = completion(
    model="openai/gpt-4o",
    messages=[{"content": "Write a story", "role": "user"}],
    api_base="YOUR_SUPERAGENT_PROXY_URL",
    stream=True
)

for chunk in response:
    if hasattr(chunk.choices[0].delta, 'content') and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
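For async applications, the same streaming call works with LiteLLM's `acompletion`. A minimal sketch, assuming the same Superagent proxy URL placeholder as above:

```python
import asyncio
from litellm import acompletion

async def main():
    # Async streaming through the Superagent proxy
    response = await acompletion(
        model="openai/gpt-4o",
        messages=[{"content": "Write a story", "role": "user"}],
        api_base="YOUR_SUPERAGENT_PROXY_URL",  # Replace with your Superagent proxy
        stream=True
    )
    async for chunk in response:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")

asyncio.run(main())
```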
Docker Deployment
```dockerfile
FROM ghcr.io/berriai/litellm:main-latest

# Copy your config
COPY config.yaml /app/config.yaml

# Expose port
EXPOSE 4000

# Start with config
CMD ["--config", "/app/config.yaml"]
```
Configuration Benefits
- Multi-Provider Security: Protect requests to 100+ LLM providers
- Centralized Management: Single proxy for all your LLM routing with added security
- Load Balancing: Distribute load while maintaining security
- Fallback Protection: Automatic fallbacks with consistent security
- Cost Tracking: LiteLLM's cost tracking keeps working with Superagent protection in place (see the sketch below)
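As an illustration of the last point, LiteLLM's `completion_cost` helper still applies to responses returned through the Superagent proxy, since cost is computed from the model and usage data in the response. A sketch; the proxy URL placeholder is the same assumption as in the examples above:

```python
from litellm import completion, completion_cost

response = completion(
    model="openai/gpt-4o",
    messages=[{"content": "Hello world", "role": "user"}],
    api_base="YOUR_SUPERAGENT_PROXY_URL"  # Replace with your Superagent proxy
)

# Compute the cost of the call from the returned usage data
print(f"Cost: ${completion_cost(completion_response=response):.6f}")
```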
Migration Notes
- Existing Models: All LiteLLM supported models work unchanged
- OpenAI Compatible: Maintain OpenAI SDK compatibility
- Configuration Driven: Simple YAML config updates to add protection
- Streaming Support: Full streaming capability preserved
- Added Security: All LLM requests now protected by Superagent firewall
Configure LiteLLM with your Superagent proxy URLs to add comprehensive security to your multi-provider LLM gateway!