
Providers

Supported LLM providers and configuration

Safety Agent works with any language model. Use the provider/model format when specifying models.

Supported Providers

| Provider | Model Format | Required Env Variables |
| --- | --- | --- |
| Superagent | `superagent/{model}` | None (default for guard) |
| Anthropic | `anthropic/{model}` | `ANTHROPIC_API_KEY` |
| AWS Bedrock | `bedrock/{model}` | `AWS_BEDROCK_API_KEY`, `AWS_BEDROCK_REGION` (optional) |
| Fireworks | `fireworks/{model}` | `FIREWORKS_API_KEY` |
| Google | `google/{model}` | `GOOGLE_API_KEY` |
| Groq | `groq/{model}` | `GROQ_API_KEY` |
| OpenAI | `openai/{model}` | `OPENAI_API_KEY` |
| OpenRouter | `openrouter/{provider}/{model}` | `OPENROUTER_API_KEY` |
| Vercel AI Gateway | `vercel/{provider}/{model}` | `AI_GATEWAY_API_KEY` |
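As a sketch of how the `provider/model` strings above decompose, the helper below splits a spec at the first slash; it is illustrative only, not part of the Safety Agent SDK:

```typescript
// Illustrative only: split a "provider/model" spec at the first slash.
// For gateway providers (openrouter, vercel) the model part keeps its
// own nested "provider/model" segment.
interface ParsedModel {
  provider: string;
  model: string;
}

function parseModel(spec: string): ParsedModel {
  const idx = spec.indexOf("/");
  if (idx === -1) {
    throw new Error(`Expected "provider/model", got "${spec}"`);
  }
  return { provider: spec.slice(0, idx), model: spec.slice(idx + 1) };
}
```

For example, `parseModel("openrouter/anthropic/claude-3-5-sonnet")` yields provider `openrouter` and model `anthropic/claude-3-5-sonnet`.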

Environment Setup

The Superagent guard model is used by default and requires no API keys. Set the appropriate API key environment variable only if you want to use a different provider or need the redact() method (which requires a model):

```bash
# Superagent (optional - for usage tracking only)
export SUPERAGENT_API_KEY=your-key

# OpenAI
export OPENAI_API_KEY=sk-...

# Anthropic
export ANTHROPIC_API_KEY=sk-ant-...

# Google
export GOOGLE_API_KEY=...

# Groq
export GROQ_API_KEY=gsk_...

# Fireworks
export FIREWORKS_API_KEY=...

# AWS Bedrock
export AWS_BEDROCK_API_KEY=...
export AWS_BEDROCK_REGION=us-east-1  # optional

# OpenRouter
export OPENROUTER_API_KEY=...

# Vercel AI Gateway
export AI_GATEWAY_API_KEY=...
```
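To make the provider-to-key mapping concrete, here is a hypothetical helper (not part of the SDK) that reports which environment variable, if any, is still unset for a given model spec:

```typescript
// Illustrative mapping from provider prefix to its required env variable,
// mirroring the table above. Not part of the Safety Agent SDK.
const REQUIRED_ENV: Record<string, string> = {
  anthropic: "ANTHROPIC_API_KEY",
  bedrock: "AWS_BEDROCK_API_KEY",
  fireworks: "FIREWORKS_API_KEY",
  google: "GOOGLE_API_KEY",
  groq: "GROQ_API_KEY",
  openai: "OPENAI_API_KEY",
  openrouter: "OPENROUTER_API_KEY",
  vercel: "AI_GATEWAY_API_KEY",
};

// Returns the name of the missing variable, or null when nothing is
// required (superagent/* needs no key) or the key is already set.
function missingKey(
  modelSpec: string,
  env: Record<string, string | undefined> = process.env
): string | null {
  const provider = modelSpec.split("/")[0];
  const varName = REQUIRED_ENV[provider];
  if (!varName) return null;
  return env[varName] ? null : varName;
}
```

A startup check like `missingKey("openai/gpt-4o-mini")` returning `"OPENAI_API_KEY"` tells you which export from the list above is missing before the first guard call fails.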

Usage Examples

```typescript
import { createClient } from "@superagent-ai/safety-agent";

const client = createClient();

// Superagent (default - no API key required)
await client.guard({
  input: "user message"
  // model defaults to superagent/guard-0.6b
});

// Or specify Superagent model explicitly
await client.guard({
  input: "user message",
  model: "superagent/guard-0.6b"
});

// OpenAI
await client.guard({
  input: "user message",
  model: "openai/gpt-4o-mini"
});

// Anthropic
await client.guard({
  input: "user message",
  model: "anthropic/claude-3-5-sonnet-20241022"
});

// Google
await client.guard({
  input: "user message",
  model: "google/gemini-1.5-pro"
});

// Groq
await client.guard({
  input: "user message",
  model: "groq/llama-3.1-70b-versatile"
});

// OpenRouter (nested provider/model)
await client.guard({
  input: "user message",
  model: "openrouter/anthropic/claude-3-5-sonnet"
});
```

Vision-Capable Models

For image analysis, use a vision-capable model:

| Provider | Vision Models |
| --- | --- |
| OpenAI | `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`, `gpt-4.1` |
| Anthropic | `claude-3-*`, `claude-sonnet-4-*`, `claude-opus-4-*`, `claude-haiku-4-*` |
| Google | `gemini-*` |

Other providers (Fireworks, Groq, OpenRouter, Vercel, Bedrock) currently support text-only analysis.
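The vision table can be encoded as a small pre-flight check. The patterns below are illustrative, only cover the model families listed here, and are not an official capability API:

```typescript
// Illustrative vision-capability check based on the table above.
// Patterns are assumptions derived from the listed model names.
const VISION_PATTERNS: Record<string, RegExp[]> = {
  openai: [/^gpt-4o/, /^gpt-4-turbo/, /^gpt-4\.1/],
  anthropic: [/^claude-3-/, /^claude-(sonnet|opus|haiku)-4-/],
  google: [/^gemini-/],
};

function supportsVision(modelSpec: string): boolean {
  const idx = modelSpec.indexOf("/");
  if (idx === -1) return false;
  const provider = modelSpec.slice(0, idx);
  const model = modelSpec.slice(idx + 1);
  // Providers absent from the map (fireworks, groq, etc.) are text-only.
  return (VISION_PATTERNS[provider] ?? []).some((re) => re.test(model));
}
```

Checking `supportsVision(model)` before sending image input lets you fail fast instead of relying on a provider error.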

Choosing a Model

Default: Superagent Guard Model

The default superagent/guard-0.6b model is recommended for most guard use cases:

  • No API keys required - works out of the box
  • Low latency - optimized for fast classification
  • Proven accuracy - purpose-trained for safety classification
  • No cost - free to use

Other Models

Consider these factors when selecting a different model:

  • Latency: Smaller models like gpt-4o-mini or claude-3-haiku are faster
  • Accuracy: Larger models like gpt-4o or claude-3-5-sonnet may catch more edge cases
  • Cost: Varies significantly by provider and model size
  • Compliance: Some providers offer data residency or compliance certifications

For guard use cases where you need a different provider, openai/gpt-4o-mini or anthropic/claude-3-haiku-20240307 provide a good balance of speed, accuracy, and cost.