
LiteLLM

Keeptrusts integrates with LiteLLM in two ways: you can route LiteLLM proxy traffic through the Keeptrusts gateway to add policy enforcement, or you can replace LiteLLM entirely by using Keeptrusts as your unified LLM proxy with built-in governance. This page covers both patterns and the migration path from LiteLLM to Keeptrusts.

Use this page when

  • You are routing LiteLLM proxy traffic through Keeptrusts for governance.
  • You are migrating from LiteLLM to Keeptrusts as your LLM proxy.
  • You need the gateway config for LiteLLM integration or replacement.
  • If you want a general quickstart instead, see Quickstart.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • For LiteLLM integration: LiteLLM proxy running (litellm --config config.yaml)
  • For Keeptrusts replacement: upstream LLM provider keys for all providers currently configured in LiteLLM
  • Keeptrusts CLI (kt) installed and authenticated (kt auth login)
  • Upstream LLM provider keys exported as environment variables
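Before starting either pattern, it can help to confirm the environment is ready. The sketch below is illustrative, not a kt command: the env var names match the provider targets configured later on this page, and the helper itself is hypothetical.

```python
import os
import shutil


def missing_prereqs(required_env, env=None, cli="kt"):
    """Return problems blocking setup: unset env vars and a missing CLI."""
    env = os.environ if env is None else env
    problems = [f"env var {name} is not set" for name in required_env if not env.get(name)]
    if cli and shutil.which(cli) is None:
        problems.append(f"{cli} CLI not found on PATH")
    return problems


if __name__ == "__main__":
    for problem in missing_prereqs(["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]):
        print("blocked:", problem)
```

An empty result means all required keys are exported and the CLI is on PATH.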

Configuration

Pattern A: LiteLLM → Keeptrusts → Provider (add governance to LiteLLM)

Route LiteLLM's outbound calls through Keeptrusts by configuring LiteLLM to use the Keeptrusts gateway as its upstream:

LiteLLM config (litellm_config.yaml):

model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_base: http://localhost:41002/v1
      api_key: unused
  - model_name: claude-sonnet
    litellm_params:
      model: openai/claude-3-5-sonnet-20241022
      api_base: http://localhost:41002/v1
      api_key: unused

Keeptrusts gateway config (policy-config.yaml):

pack:
  name: litellm-upstream-governance
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: openai-gpt4o
      provider: openai:chat:gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
    - id: anthropic-sonnet
      provider: anthropic:chat:claude-3-5-sonnet-20241022
      secret_key_ref:
        env: ANTHROPIC_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - audit-logger
  policy:
    prompt-injection:
      threshold: 0.8
      action: block
    pii-detector:
      action: redact
      entities:
        - PERSON
        - EMAIL_ADDRESS
        - PHONE_NUMBER
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true

Pattern B: Replace LiteLLM with Keeptrusts (migration)

Keeptrusts supports multi-provider routing natively. Replace your LiteLLM proxy with a Keeptrusts gateway that handles both routing and governance:

pack:
  name: keeptrusts-multi-provider
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: openai-gpt4o
      provider: openai:chat:gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
    - id: anthropic-sonnet
      provider: anthropic:chat:claude-3-5-sonnet-20241022
      secret_key_ref:
        env: ANTHROPIC_API_KEY
    - id: groq-llama
      provider: groq:chat:llama-3.3-70b-versatile
      secret_key_ref:
        env: GROQ_API_KEY
    - id: mistral-large
      provider: mistral:chat:mistral-large-latest
      secret_key_ref:
        env: MISTRAL_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - dlp-filter
    - rbac
    - audit-logger
  policy:
    prompt-injection:
      threshold: 0.8
      action: block
    pii-detector:
      action: redact
      entities:
        - PERSON
        - EMAIL_ADDRESS
        - PHONE_NUMBER
    dlp-filter:
      patterns:
        - name: api-key
          regex: "(sk-|anthropic-|gsk_)[a-zA-Z0-9]+"
          action: block
    rbac:
      roles:
        developer:
          allowed_models:
            - gpt-4o
            - llama-3.3-70b-versatile
          max_tokens_per_request: 4096
        production:
          allowed_models:
            - gpt-4o
            - claude-3-5-sonnet-20241022
            - mistral-large-latest
          max_tokens_per_request: 8192
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true
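A common misconfiguration is naming a policy in policies.chain without defining it under policies.policy. The sketch below is a quick sanity check you can run before starting the gateway, assuming the config has been parsed into a dict (for example with PyYAML); the helper is illustrative and is not a kt feature.

```python
def check_policy_chain(config):
    """Return chain entries that have no matching policies.policy block."""
    policies = config.get("policies", {})
    chain = policies.get("chain", [])
    defined = policies.get("policy", {})
    return [name for name in chain if name not in defined]


config = {
    "policies": {
        "chain": ["prompt-injection", "pii-detector", "audit-logger"],
        "policy": {
            "prompt-injection": {"threshold": 0.8, "action": "block"},
            "pii-detector": {"action": "redact"},
        },
    }
}
print(check_policy_chain(config))  # ['audit-logger'] is chained but never configured
```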

Start the gateway

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GROQ_API_KEY="gsk_..."
export MISTRAL_API_KEY="..."
kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml

Setup steps

For Pattern A (LiteLLM + Keeptrusts)

  1. Start the Keeptrusts gateway with your policy config.
  2. Update litellm_config.yaml to set api_base to the Keeptrusts gateway URL for each model.
  3. Start LiteLLM: litellm --config litellm_config.yaml --port 4000.
  4. Point your application at LiteLLM as before (http://localhost:4000/v1).
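Step 2 is easy to get wrong when the model list is long. The hypothetical helper below checks that every entry routes through the gateway, assuming litellm_config.yaml has been parsed into the model_list structure shown above:

```python
def entries_not_routed(model_list, gateway_base="http://localhost:41002/v1"):
    """Return model names whose api_base does not point at the Keeptrusts gateway."""
    return [
        entry["model_name"]
        for entry in model_list
        if entry.get("litellm_params", {}).get("api_base") != gateway_base
    ]


model_list = [
    {"model_name": "gpt-4o",
     "litellm_params": {"model": "openai/gpt-4o",
                        "api_base": "http://localhost:41002/v1"}},
    {"model_name": "claude-sonnet",
     "litellm_params": {"model": "openai/claude-3-5-sonnet-20241022",
                        "api_base": "https://api.anthropic.com/v1"}},
]
print(entries_not_routed(model_list))  # ['claude-sonnet'] still bypasses the gateway
```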

For Pattern B (Replace LiteLLM)

  1. Map each LiteLLM model_list entry to a Keeptrusts providers.targets entry.
  2. Export all provider API keys as environment variables.
  3. Start the Keeptrusts gateway.
  4. Update your application's base_url from LiteLLM (http://localhost:4000/v1) to Keeptrusts (http://localhost:41002/v1).
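Step 1 of the migration can be sketched mechanically. The helper below is illustrative only: it assumes your original (pre-gateway) LiteLLM config names the true upstream provider in the provider/model form, and that the target shape matches the Keeptrusts config shown above.

```python
def litellm_to_targets(model_list, env_by_provider):
    """Derive Keeptrusts providers.targets entries from a LiteLLM model_list."""
    targets = []
    for entry in model_list:
        # LiteLLM model strings look like "<provider>/<model>".
        provider, _, model = entry["litellm_params"]["model"].partition("/")
        targets.append({
            "id": entry["model_name"],
            "provider": f"{provider}:chat:{model}",
            "secret_key_ref": {"env": env_by_provider[provider]},
        })
    return targets


model_list = [{"model_name": "gpt-4o",
               "litellm_params": {"model": "openai/gpt-4o"}}]
print(litellm_to_targets(model_list, {"openai": "OPENAI_API_KEY"}))
```

Review the generated ids before adopting them: clients send the id (or model name) in requests, so it must match what your applications already use.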

Client code

The OpenAI SDK call is identical in both patterns; only base_url changes. In Pattern A, point the client at LiteLLM (http://localhost:4000/v1) as before; in Pattern B, point it at the Keeptrusts gateway:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:41002/v1",  # Pattern A: use http://localhost:4000/v1
    api_key="unused",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain Kubernetes pod scheduling."}],
    max_tokens=512,
)
print(response.choices[0].message.content)

Verification

# Verify the Keeptrusts gateway is healthy
curl http://localhost:41002/health

# Test each provider through the gateway
curl -s http://localhost:41002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}], "max_tokens": 32}' \
  | jq '.choices[0].message.content'

# Check audit log
kt events list --limit 5
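To exercise every configured provider, the per-model request bodies can be generated once and sent with any HTTP client. This sketch only builds the payloads; sending them (for example with curl as above) is left to the operator, and the model list here simply mirrors the Pattern B config.

```python
import json

MODELS = ["gpt-4o", "claude-3-5-sonnet-20241022",
          "llama-3.3-70b-versatile", "mistral-large-latest"]


def smoke_payload(model, prompt="Hello", max_tokens=32):
    """Minimal chat-completions body for a single-provider smoke test."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


for model in MODELS:
    # Pipe each body into: curl -s http://localhost:41002/v1/chat/completions -d @-
    print(json.dumps(smoke_payload(model)))
```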

Recommended policies

| Policy | Purpose | Recommended setting |
| --- | --- | --- |
| pii-detector | Redact PII across all providers | action: redact, entities: PERSON, EMAIL, PHONE |
| prompt-injection | Block adversarial prompts regardless of provider | threshold: 0.8, action: block |
| dlp-filter | Prevent API keys from leaking in prompts | action: block for key patterns |
| rbac | Restrict provider access by team or environment | Map teams to allowed models |
| audit-logger | Unified audit trail across all providers | retention_days: 365, immutable: true |

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| LiteLLM returns 502 | Keeptrusts gateway unreachable from LiteLLM | Verify the gateway is running and the api_base URL is correct |
| Model not found after migration | Model name mismatch | Ensure the Keeptrusts provider id or model name matches what clients send |
| Authentication errors from provider | API key env var not set | Export all provider keys before starting the gateway |
| Fallback routing not working | Keeptrusts uses provider targets, not LiteLLM fallbacks | Configure multiple targets in Keeptrusts with a routing strategy |
| Duplicate audit entries | Both LiteLLM and Keeptrusts logging | Disable LiteLLM logging when using Pattern A; keep only the Keeptrusts audit log |

For AI systems

  • Canonical terms: Keeptrusts gateway, LiteLLM, LiteLLM proxy, multi-provider routing, migration, policy-config.yaml.
  • Config field names: provider, base_url, secret_key_ref.env, rbac, audit-logger.
  • Key behavior: Pattern A routes LiteLLM through Keeptrusts for governance. Pattern B replaces LiteLLM entirely. Both use the same client-side OpenAI SDK interface.
  • Best next pages: Guardrails AI integration, OpenAI integration, Policy controls catalog.

For engineers

Prerequisites

  • For Pattern A: LiteLLM running, kt CLI installed. For Pattern B: all provider API keys, kt CLI installed.

Validation

  • Send requests to each provider through the gateway and verify responses.
  • Run kt events list --limit 10 and verify all providers appear in the audit log.
  • Test RBAC by sending a request with a restricted role and verifying the 403 response.

For leaders

  • LiteLLM provides multi-provider routing but lacks governance, audit trails, and policy enforcement. Keeptrusts provides all three plus routing.
  • Pattern A (LiteLLM + Keeptrusts) is the fastest path if your team is already invested in LiteLLM. Pattern B (replace LiteLLM) reduces operational complexity by consolidating two proxies into one.
  • Unified audit logging across all providers simplifies compliance reporting — one system of record instead of aggregating logs from LiteLLM and each individual provider.

Next steps