# LiteLLM
Keeptrusts integrates with LiteLLM in two ways: you can route LiteLLM proxy traffic through the Keeptrusts gateway to add policy enforcement, or you can replace LiteLLM entirely by using Keeptrusts as your unified LLM proxy with built-in governance. This page covers both patterns and the migration path from LiteLLM to Keeptrusts.
## Use this page when
- You are routing LiteLLM proxy traffic through Keeptrusts for governance.
- You are migrating from LiteLLM to Keeptrusts as your LLM proxy.
- You need the gateway config for LiteLLM integration or replacement.
- If you want a general quickstart instead, see Quickstart.
## Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
## Prerequisites

- For LiteLLM integration: LiteLLM proxy running (`litellm --config config.yaml`)
- For Keeptrusts replacement: upstream LLM provider keys for all providers currently configured in LiteLLM
- Keeptrusts CLI (`kt`) installed and authenticated (`kt auth login`)
- Upstream LLM provider keys exported as environment variables
## Configuration

### Pattern A: LiteLLM → Keeptrusts → Provider (add governance to LiteLLM)
Route LiteLLM's outbound calls through Keeptrusts by configuring LiteLLM to use the Keeptrusts gateway as its upstream:
LiteLLM config (`litellm_config.yaml`):

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_base: http://localhost:41002/v1
      api_key: unused
  - model_name: claude-sonnet
    litellm_params:
      model: openai/claude-3-5-sonnet-20241022
      api_base: http://localhost:41002/v1
      api_key: unused
```
Keeptrusts gateway config (`policy-config.yaml`):

```yaml
pack:
  name: litellm-upstream-governance
  version: 1.0.0
  enabled: true
providers:
  targets:
    - id: openai-gpt4o
      provider: openai:chat:gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
    - id: anthropic-sonnet
      provider: anthropic:chat:claude-3-5-sonnet-20241022
      secret_key_ref:
        env: ANTHROPIC_API_KEY
policies:
  chain:
    - prompt-injection
    - pii-detector
    - audit-logger
  policy:
    prompt-injection:
      threshold: 0.8
      action: block
    pii-detector:
      action: redact
      entities:
        - PERSON
        - EMAIL_ADDRESS
        - PHONE_NUMBER
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true
```
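The `redact` action replaces matched entities with placeholders before the prompt reaches the provider. The toy sketch below illustrates that behavior only — it is not the actual Keeptrusts detector, and the regexes are deliberately simple. Real `PERSON` detection requires NER, so only the pattern-based entities are shown:

```python
import re

# Illustrative-only patterns for two of the configured entities.
# The actual Keeptrusts pii-detector is not regex-based like this.
PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE_NUMBER": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    # Replace each matched entity with a labeled placeholder.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(redact("Mail alice@example.com or call +1 555-010-9999."))
# → Mail <EMAIL_ADDRESS> or call <PHONE_NUMBER>.
```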
### Pattern B: Replace LiteLLM with Keeptrusts (migration)
Keeptrusts supports multi-provider routing natively. Replace your LiteLLM proxy with a Keeptrusts gateway that handles both routing and governance:
```yaml
pack:
  name: keeptrusts-multi-provider
  version: 1.0.0
  enabled: true
providers:
  targets:
    - id: openai-gpt4o
      provider: openai:chat:gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
    - id: anthropic-sonnet
      provider: anthropic:chat:claude-3-5-sonnet-20241022
      secret_key_ref:
        env: ANTHROPIC_API_KEY
    - id: groq-llama
      provider: groq:chat:llama-3.3-70b-versatile
      secret_key_ref:
        env: GROQ_API_KEY
    - id: mistral-large
      provider: mistral:chat:mistral-large-latest
      secret_key_ref:
        env: MISTRAL_API_KEY
policies:
  chain:
    - prompt-injection
    - pii-detector
    - dlp-filter
    - rbac
    - audit-logger
  policy:
    prompt-injection:
      threshold: 0.8
      action: block
    pii-detector:
      action: redact
      entities:
        - PERSON
        - EMAIL_ADDRESS
        - PHONE_NUMBER
    dlp-filter:
      patterns:
        - name: api-key
          regex: "(sk-|anthropic-|gsk_)[a-zA-Z0-9]+"
          action: block
    rbac:
      roles:
        developer:
          allowed_models:
            - gpt-4o
            - llama-3.3-70b-versatile
          max_tokens_per_request: 4096
        production:
          allowed_models:
            - gpt-4o
            - claude-3-5-sonnet-20241022
            - mistral-large-latest
          max_tokens_per_request: 8192
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true
```
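Before deploying, the `dlp-filter` regex can be sanity-checked locally with Python's `re` module. A standalone sketch (the sample keys below are fabricated):

```python
import re

# Same pattern as the dlp-filter config above.
KEY_PATTERN = re.compile(r"(sk-|anthropic-|gsk_)[a-zA-Z0-9]+")

samples = [
    "please use sk-abc123DEF to call the API",  # OpenAI-style key -> BLOCK
    "token gsk_XYZ789 goes here",               # Groq-style key -> BLOCK
    "no secrets in this prompt",                # clean -> ALLOW
]
for text in samples:
    verdict = "BLOCK" if KEY_PATTERN.search(text) else "ALLOW"
    print(verdict, "-", text)
```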
### Start the gateway
```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GROQ_API_KEY="gsk_..."
export MISTRAL_API_KEY="..."

kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml
```
## Setup steps

### For Pattern A (LiteLLM + Keeptrusts)
- Start the Keeptrusts gateway with your policy config.
- Update `litellm_config.yaml` to set `api_base` to the Keeptrusts gateway URL for each model.
- Start LiteLLM: `litellm --config litellm_config.yaml --port 4000`.
- Point your application at LiteLLM as before (`http://localhost:4000/v1`).
### For Pattern B (Replace LiteLLM)

- Map each LiteLLM `model_list` entry to a Keeptrusts `providers.targets` entry.
- Export all provider API keys as environment variables.
- Start the Keeptrusts gateway.
- Update your application's `base_url` from LiteLLM (`http://localhost:4000/v1`) to Keeptrusts (`http://localhost:41002/v1`).
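The first migration step — mapping `model_list` entries to `providers.targets` — can be sketched as a small script. This is a hypothetical helper, not a Keeptrusts tool: the generated `id` naming and the provider-to-env-var table are assumptions to adjust for your config:

```python
# Hypothetical mapping from LiteLLM model_list entries to Keeptrusts
# providers.targets entries. ENV_KEYS is an assumption; extend as needed.
ENV_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "groq": "GROQ_API_KEY",
    "mistral": "MISTRAL_API_KEY",
}

def to_target(entry: dict) -> dict:
    # litellm_params.model looks like "openai/gpt-4o": provider/model.
    provider, model = entry["litellm_params"]["model"].split("/", 1)
    return {
        "id": f"{provider}-{model}",  # illustrative id scheme
        "provider": f"{provider}:chat:{model}",
        "secret_key_ref": {"env": ENV_KEYS[provider]},
    }

model_list = [
    {"model_name": "gpt-4o",
     "litellm_params": {"model": "openai/gpt-4o"}},
    {"model_name": "claude-sonnet",
     "litellm_params": {"model": "anthropic/claude-3-5-sonnet-20241022"}},
]
targets = [to_target(e) for e in model_list]
for t in targets:
    print(t["id"], "->", t["provider"])
```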
## Client code

The OpenAI SDK interface is identical in both patterns; only `base_url` changes. For Pattern A, keep pointing at LiteLLM (`http://localhost:4000/v1`); for Pattern B, point directly at the Keeptrusts gateway:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:41002/v1",
    api_key="unused",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain Kubernetes pod scheduling."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```
## Verification
```bash
# Verify the Keeptrusts gateway is healthy
curl http://localhost:41002/health

# Test each provider through the gateway
curl -s http://localhost:41002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}], "max_tokens": 32}' \
  | jq .choices[0].message.content

# Check audit log
kt events list --limit 5
```
## Recommended policies

| Policy | Purpose | Recommended setting |
|---|---|---|
| `pii-detector` | Redact PII across all providers | `action: redact`; entities: PERSON, EMAIL_ADDRESS, PHONE_NUMBER |
| `prompt-injection` | Block adversarial prompts regardless of provider | `threshold: 0.8`, `action: block` |
| `dlp-filter` | Prevent API keys from leaking in prompts | `action: block` for key patterns |
| `rbac` | Restrict provider access by team or environment | Map teams to allowed models |
| `audit-logger` | Unified audit trail across all providers | `retention_days: 365`, `immutable: true` |
## Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| LiteLLM returns 502 | Keeptrusts gateway unreachable from LiteLLM | Verify gateway is running and api_base URL is correct |
| Model not found after migration | Model name mismatch | Ensure Keeptrusts provider id or model name matches what clients send |
| Authentication errors from provider | API key env var not set | Export all provider keys before starting the gateway |
| Fallback routing not working | Keeptrusts uses provider targets, not LiteLLM fallbacks | Configure multiple targets in Keeptrusts with routing strategy |
| Duplicate audit entries | Both LiteLLM and Keeptrusts logging | Disable LiteLLM logging when using Pattern A; keep only Keeptrusts audit |
## For AI systems

- Canonical terms: Keeptrusts gateway, LiteLLM, LiteLLM proxy, multi-provider routing, migration, `policy-config.yaml`.
- Config field names: `provider`, `base_url`, `secret_key_ref.env`, `rbac`, `audit-logger`.
- Key behavior: Pattern A routes LiteLLM through Keeptrusts for governance. Pattern B replaces LiteLLM entirely. Both use the same client-side OpenAI SDK interface.
- Best next pages: Guardrails AI integration, OpenAI integration, Policy controls catalog.
## For engineers

### Prerequisites

- For Pattern A: LiteLLM running, `kt` CLI installed.
- For Pattern B: all provider API keys, `kt` CLI installed.

### Validation

- Send requests to each provider through the gateway and verify responses.
- Run `kt events list --limit 10` and verify all providers appear in the audit log.
- Test RBAC by sending a request with a restricted role and verifying the 403 response.
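The RBAC check in the last step can be reasoned about with a toy decision function that mirrors the `rbac` roles configured above. Illustrative only — this is not the actual Keeptrusts policy engine:

```python
# Toy RBAC decision mirroring the rbac block in policy-config.yaml.
ROLES = {
    "developer": {
        "allowed_models": {"gpt-4o", "llama-3.3-70b-versatile"},
        "max_tokens_per_request": 4096,
    },
    "production": {
        "allowed_models": {"gpt-4o", "claude-3-5-sonnet-20241022",
                           "mistral-large-latest"},
        "max_tokens_per_request": 8192,
    },
}

def authorize(role: str, model: str, max_tokens: int) -> int:
    """Return an HTTP-style status: 200 allowed, 403 forbidden."""
    rule = ROLES.get(role)
    if rule is None or model not in rule["allowed_models"]:
        return 403  # unknown role, or model not allowed for this role
    if max_tokens > rule["max_tokens_per_request"]:
        return 403  # per-request token budget exceeded
    return 200

# A developer requesting a production-only model should be rejected.
print(authorize("developer", "claude-3-5-sonnet-20241022", 512))  # → 403
```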
## For leaders
- LiteLLM provides multi-provider routing but lacks governance, audit trails, and policy enforcement. Keeptrusts provides all three plus routing.
- Pattern A (LiteLLM + Keeptrusts) is the fastest path if your team is already invested in LiteLLM. Pattern B (replace LiteLLM) reduces operational complexity by consolidating two proxies into one.
- Unified audit logging across all providers simplifies compliance reporting — one system of record instead of aggregating logs from LiteLLM and each individual provider.
## Next steps

- Guardrails AI integration — add application-level validation
- OpenAI integration — direct provider setup
- Provider routing strategies — fallback and load-based routing
- Policy controls catalog — full reference for all policy types
- Quickstart — install `kt` and run your first gateway