Anthropic
Keeptrusts acts as a gateway in front of Anthropic's Claude API, adding full policy enforcement, audit logging, and automatic format translation. Clients can send requests in standard OpenAI format — Keeptrusts translates them to Anthropic's native wire format on the fly and translates responses back. Direct Anthropic-format requests are also supported natively.
Use this page when
- You need the exact command, config, API, or integration details for Anthropic.
- You are wiring automation or AI retrieval and need canonical names, examples, and constraints.
- If you want a guided rollout instead of a reference page, use the linked workflow pages in Next steps.
Primary audience
- Primary: AI Agents, Technical Engineers
- Secondary: Technical Leaders
Prerequisites
- Anthropic API key — obtain one from the Anthropic Console.
- Keeptrusts CLI — install kt (see the quickstart guide).
- Export your API key:
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
Keeptrusts auto-detects ANTHROPIC_API_KEY when provider is set to "anthropic". The correct auth header (x-api-key) and empty prefix are applied automatically.
Configuration
Create a policy-config.yaml with your provider targets:
pack:
name: anthropic-gateway
version: 1.0.0
enabled: true
policies:
chain:
- prompt-injection
- pii-detector
- audit-logger
providers:
strategy: single
targets:
- id: anthropic-sonnet
provider: anthropic
model: claude-sonnet-4-20250514
base_url: https://api.anthropic.com
secret_key_ref:
env: ANTHROPIC_API_KEY
provider_type: anthropic
format: anthropic
Start the gateway:
kt gateway run \
--listen 0.0.0.0:8080 \
--policy-config policy-config.yaml
In the recommended workflow, the Anthropic target in policy-config.yaml stays authoritative. The gateway reads ANTHROPIC_API_KEY through secret_key_ref instead of --upstream overrides.
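As a sketch of how a secret_key_ref entry resolves at startup — the helper name, error messages, and EXAMPLE_KEY variable below are illustrative, not Keeptrusts internals:

```python
import os

def resolve_secret_key_ref(ref: dict) -> str:
    """Resolve a secret_key_ref object to the key material it points at.

    Only the `env` form used in the config above is handled here; a real
    gateway may support additional reference kinds.
    """
    env_name = ref.get("env")
    if not env_name:
        raise ValueError("secret_key_ref must name an environment variable via 'env'")
    value = os.environ.get(env_name)
    if not value:
        raise RuntimeError(f"environment variable {env_name} is not set")
    return value

# Demonstration with a made-up variable so no real key is involved
os.environ["EXAMPLE_KEY"] = "sk-ant-example"
print(resolve_secret_key_ref({"env": "EXAMPLE_KEY"}))  # sk-ant-example
```

Resolving keys through the environment keeps secrets out of policy-config.yaml, so the file can be committed and reviewed like any other config.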
Provider Fields
All fields available on a providers.targets[] entry for Anthropic:
| Field | Type | Default | Description |
|---|---|---|---|
id | string | required | Unique identifier for this target |
provider | string | required | Provider ID: "anthropic" or "anthropic:messages:claude-sonnet-4-20250514" |
model | string | required | Model name, e.g. "claude-sonnet-4-20250514" |
base_url | string | https://api.anthropic.com | API base URL (auto-detected for Anthropic) |
secret_key_ref | object | ANTHROPIC_API_KEY | Object reference to the environment variable holding the API key |
api_key_header | string | x-api-key | HTTP header used for authentication (auto-detected) |
api_key_prefix | string | "" | Prefix prepended to the key value — empty for Anthropic |
anthropic_version | string | 2023-06-01 | Value for the anthropic-version request header |
timeout_seconds | integer | 60 | Maximum time for non-streaming requests |
stream_timeout_seconds | integer | none | Maximum time for streaming requests; falls back to timeout_seconds |
max_context_tokens | integer | none | Maximum tokens in the context window |
max_messages | integer | none | Maximum number of messages to retain in the conversation |
headers | map | {} | Additional HTTP headers sent with each request |
format | string | "anthropic" | Wire format: "anthropic" (auto-translates to/from OpenAI) |
provider_type | string | "anthropic" | Explicit provider type; overrides URL heuristic |
description | string | none | Human-readable description for dashboards and logs |
weight | float | 1.0 | Routing weight for weighted_round_robin strategy |
data_policy | object | none | Data handling policy (zero_data_retention, training_opt_out, retention_days) |
pricing | object | none | Token pricing in USD per 1M tokens (prompt, completion) |
health_probe | object | none | Active health probe configuration |
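The required fields and defaults in the table can be summarized as a small validator. This is an illustrative sketch, not Keeptrusts code — the helper name is made up, and the field names and default values come from the table above:

```python
REQUIRED = ("id", "provider", "model")

# Defaults for an Anthropic target, per the field table above
ANTHROPIC_DEFAULTS = {
    "base_url": "https://api.anthropic.com",
    "api_key_header": "x-api-key",
    "api_key_prefix": "",
    "anthropic_version": "2023-06-01",
    "timeout_seconds": 60,
    "format": "anthropic",
    "provider_type": "anthropic",
    "weight": 1.0,
    "headers": {},
}

def normalize_target(target: dict) -> dict:
    """Check required fields, then fill in documented defaults."""
    missing = [f for f in REQUIRED if f not in target]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return {**ANTHROPIC_DEFAULTS, **target}

t = normalize_target({"id": "anthropic-sonnet", "provider": "anthropic",
                      "model": "claude-sonnet-4-20250514"})
print(t["api_key_header"])  # x-api-key
```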
Authentication
Anthropic uses the x-api-key header with no prefix — different from the standard Authorization: Bearer pattern. Keeptrusts auto-detects this when provider is "anthropic":
# These are the defaults — you only need to set secret_key_ref
secret_key_ref:
env: "ANTHROPIC_API_KEY"
api_key_header: "x-api-key"
api_key_prefix: ""
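As a sketch of that header logic — the function below is illustrative, not part of the kt CLI:

```python
def auth_headers(provider_type, api_key, header=None, prefix=None):
    """Build the auth header for a target, mirroring the defaults above.

    Anthropic targets get x-api-key with no prefix; other providers fall
    back to the conventional Authorization: Bearer pattern.
    """
    if provider_type == "anthropic":
        header = header or "x-api-key"
        prefix = "" if prefix is None else prefix
    else:
        header = header or "Authorization"
        prefix = "Bearer " if prefix is None else prefix
    return {header: f"{prefix}{api_key}"}

print(auth_headers("anthropic", "sk-ant-abc"))  # {'x-api-key': 'sk-ant-abc'}
print(auth_headers("openai", "sk-xyz"))         # {'Authorization': 'Bearer sk-xyz'}
```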
If you route through a proxy that expects the standard Authorization: Bearer pattern instead, override api_key_header and api_key_prefix accordingly.
Supported Models
| Model | Context Window | Notes |
|---|---|---|
claude-sonnet-4-20250514 | 200K | Latest balanced model |
claude-opus-4-20250514 | 200K | Most capable model |
claude-3.5-sonnet | 200K | Previous generation balanced |
claude-3-opus | 200K | Previous generation high capability |
claude-3-haiku | 200K | Fastest, most cost-effective |
Any model available on the Anthropic API can be used — set the model field to the model ID string.
Client Examples
Once the gateway is running, point your client to http://localhost:8080. Clients can send requests in OpenAI format — Keeptrusts translates automatically — or in native Anthropic format.
- Python (OpenAI SDK)
- Python (Anthropic SDK)
- Node.js
- cURL
from openai import OpenAI
# Use the OpenAI SDK — Keeptrusts translates to Anthropic format automatically
client = OpenAI(
base_url="http://localhost:8080/v1",
api_key="sk-ant-your-key",
)
response = client.chat.completions.create(
model="claude-sonnet-4-20250514",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Explain quantum computing in one paragraph."},
],
temperature=0.7,
max_tokens=256,
)
print(response.choices[0].message.content)
import anthropic
client = anthropic.Anthropic(
base_url="http://localhost:8080",
api_key="sk-ant-your-key",
)
message = client.messages.create(
model="claude-sonnet-4-20250514",
max_tokens=1024,
messages=[
{"role": "user", "content": "Explain quantum computing in one paragraph."},
],
)
print(message.content[0].text)
import OpenAI from "openai";
// OpenAI SDK works — Keeptrusts handles format translation
const client = new OpenAI({
baseURL: "http://localhost:8080/v1",
apiKey: "sk-ant-your-key",
});
const response = await client.chat.completions.create({
model: "claude-sonnet-4-20250514",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Explain quantum computing in one paragraph." },
],
temperature: 0.7,
max_tokens: 256,
});
console.log(response.choices[0].message.content);
# OpenAI-compatible format — Keeptrusts translates to Anthropic wire format
curl http://localhost:8080/v1/chat/completions \
-H "Authorization: Bearer sk-ant-your-key" \
-H "Content-Type: application/json" \
-d '{
"model": "claude-sonnet-4-20250514",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Explain quantum computing in one paragraph."}
],
"temperature": 0.7,
"max_tokens": 256
}'
# Native Anthropic format
curl http://localhost:8080/v1/messages \
-H "x-api-key: sk-ant-your-key" \
-H "anthropic-version: 2023-06-01" \
-H "Content-Type: application/json" \
-d '{
"model": "claude-sonnet-4-20250514",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Explain quantum computing in one paragraph."}
]
}'
Streaming
Keeptrusts supports Anthropic's SSE streaming. Policies are applied to each chunk in real time. Configure a separate streaming timeout for long-running generations:
pack:
name: anthropic-providers-3
version: 1.0.0
enabled: true
providers:
  targets:
    - id: anthropic-streaming
      provider: anthropic
      model: claude-sonnet-4-20250514
      stream_timeout_seconds: 300  # illustrative value, in seconds
policies:
chain:
- audit-logger
policy:
audit-logger:
immutable: true
retention_days: 365
log_all_access: true
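The per-chunk enforcement described above can be pictured as a generator that runs every streamed text delta through the policy chain before forwarding it. This is a conceptual sketch — the function name and policy callables are made up, not gateway internals:

```python
def apply_policies_to_stream(chunks, policies):
    """Run each streamed text chunk through every policy before yielding it.

    A policy here is any callable that returns the (possibly redacted)
    chunk, or raises to abort the stream.
    """
    for chunk in chunks:
        for policy in policies:
            chunk = policy(chunk)
        yield chunk

# Toy policy: redact anything that looks like an API key fragment
redact = lambda text: text.replace("sk-ant-", "[REDACTED-")
out = "".join(apply_policies_to_stream(["hello ", "sk-ant-123"], [redact]))
print(out)  # hello [REDACTED-123
```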
- Python (OpenAI SDK)
- Python (Anthropic SDK)
- cURL
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-ant-your-key")
stream = client.chat.completions.create(
model="claude-sonnet-4-20250514",
messages=[{"role": "user", "content": "Write a short story about AI."}],
stream=True,
)
for chunk in stream:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="", flush=True)
import anthropic
client = anthropic.Anthropic(base_url="http://localhost:8080", api_key="sk-ant-your-key")
with client.messages.stream(
model="claude-sonnet-4-20250514",
max_tokens=1024,
messages=[{"role": "user", "content": "Write a short story about AI."}],
) as stream:
for text in stream.text_stream:
print(text, end="", flush=True)
curl http://localhost:8080/v1/chat/completions \
-H "Authorization: Bearer sk-ant-your-key" \
-H "Content-Type: application/json" \
-N \
-d '{
"model": "claude-sonnet-4-20250514",
"messages": [{"role": "user", "content": "Write a short story about AI."}],
"stream": true
}'
Advanced Configuration
Multi-Model Fallback
Automatically fail over from Opus to Sonnet when the primary is unavailable:
pack:
name: anthropic-providers-4
version: 1.0.0
enabled: true
providers:
  strategy: fallback
  targets:
- id: opus-primary
provider: anthropic
model: claude-opus-4-20250514
secret_key_ref:
env: ANTHROPIC_API_KEY
- id: sonnet-fallback
provider: anthropic
model: claude-sonnet-4-20250514
secret_key_ref:
env: ANTHROPIC_API_KEY
policies:
chain:
- audit-logger
policy:
audit-logger:
immutable: true
retention_days: 365
log_all_access: true
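The fallback behavior can be pictured as trying targets in declared order. This is a conceptual sketch under the assumption that list order encodes priority — it is not the gateway's actual routing code:

```python
def call_with_fallback(targets, send):
    """Try each target in order; return the first successful response.

    `send` is any callable that performs the request for one target and
    raises on failure.
    """
    last_error = None
    for target in targets:
        try:
            return send(target)
        except Exception as exc:  # a real router would catch only transient errors
            last_error = exc
    raise RuntimeError("all targets failed") from last_error

targets = ["opus-primary", "sonnet-fallback"]

def send(target):
    if target == "opus-primary":
        raise TimeoutError("upstream unavailable")
    return f"answered by {target}"

print(call_with_fallback(targets, send))  # answered by sonnet-fallback
```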
Format Translation
Keeptrusts automatically translates between OpenAI and Anthropic wire formats. Set format: "anthropic" on the target — clients can send standard OpenAI /v1/chat/completions requests and receive OpenAI-shaped responses:
pack:
name: anthropic-providers-5
version: 1.0.0
enabled: true
providers:
targets:
- id: anthropic-translated
provider: anthropic
model: claude-sonnet-4-20250514
secret_key_ref:
env: ANTHROPIC_API_KEY
policies:
chain:
- audit-logger
policy:
audit-logger:
immutable: true
retention_days: 365
log_all_access: true
This means you can swap between OpenAI and Anthropic providers without changing your client code — only the config changes.
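Conceptually, the request side of the translation lifts system messages into Anthropic's top-level system field and carries the rest through. The sketch below is a simplified illustration, not the real translator; the 1024-token fallback is an arbitrary assumption, though Anthropic's messages API does require max_tokens:

```python
def openai_to_anthropic(body: dict) -> dict:
    """Translate an OpenAI chat.completions body to an Anthropic messages body."""
    system_parts = [m["content"] for m in body["messages"] if m["role"] == "system"]
    translated = {
        "model": body["model"],
        # Anthropic requires max_tokens; 1024 is an arbitrary fallback here
        "max_tokens": body.get("max_tokens", 1024),
        "messages": [m for m in body["messages"] if m["role"] != "system"],
    }
    if system_parts:
        translated["system"] = "\n".join(system_parts)
    if "temperature" in body:
        translated["temperature"] = body["temperature"]
    return translated

req = openai_to_anthropic({
    "model": "claude-sonnet-4-20250514",
    "messages": [{"role": "system", "content": "Be terse."},
                 {"role": "user", "content": "hi"}],
    "max_tokens": 256,
})
print(req["system"])  # Be terse.
```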
Claude Agent SDK (Native Runner)
Keeptrusts supports running the Claude Agent SDK (claude code) as a native execution target. This enables policy-wrapped agentic workflows:
pack:
name: anthropic-providers-6
version: 1.0.0
enabled: true
providers:
targets:
- id: claude-agent
provider: anthropic:claude-agent-sdk:claude-sonnet-4-20250514
model: claude-sonnet-4-20250514
secret_key_ref:
env: ANTHROPIC_API_KEY
policies:
chain:
- audit-logger
policy:
audit-logger:
immutable: true
retention_days: 365
log_all_access: true
Claude Agent SDK Fields
| Field | Type | Default | Description |
|---|---|---|---|
path_to_claude_code_executable | string | none | Path to the claude binary |
working_dir | string | none | Working directory for agent execution |
permission_mode | string | none | Permission mode: "bypassPermissions", "default" |
max_turns | integer | none | Maximum conversation turns |
allow_all_tools | boolean | false | Allow all tools without restriction |
append_allowed_tools | list | [] | Additional tools to allow |
disallowed_tools | list | [] | Tools to explicitly block |
fallback_model | string | none | Model to fall back to for lower-priority tasks |
additional_directories | list | [] | Extra directories the agent can access |
cli_env | map | {} | Environment variables passed to the CLI process |
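One way to read the tool fields together is sketched below. This is an illustrative interpretation with a made-up helper and a hypothetical base tool set; it assumes disallowed_tools takes precedence over both allow mechanisms, which the table suggests but does not state:

```python
def resolve_tools(base_tools, allow_all_tools=False,
                  append_allowed_tools=(), disallowed_tools=()):
    """Compute the effective allowed tool set from the agent runner fields.

    `base_tools` stands in for whatever default tool set the runner ships;
    disallowed_tools always wins over the allow lists.
    """
    if allow_all_tools:
        allowed = set(base_tools) | set(append_allowed_tools)
    else:
        allowed = set(append_allowed_tools)
    return allowed - set(disallowed_tools)

tools = resolve_tools(
    base_tools={"Read", "Write", "Bash"},  # hypothetical default tools
    allow_all_tools=True,
    disallowed_tools={"Bash"},
)
print(sorted(tools))  # ['Read', 'Write']
```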
Circuit Breaker
Temporarily remove unhealthy Anthropic targets from the rotation:
pack:
name: anthropic-providers-7
version: 1.0.0
enabled: true
providers:
targets:
- id: anthropic-main
provider: anthropic
model: claude-sonnet-4-20250514
secret_key_ref:
env: ANTHROPIC_API_KEY
policies:
chain:
- audit-logger
policy:
audit-logger:
immutable: true
retention_days: 365
log_all_access: true
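The idea behind the breaker can be sketched as a small state machine. This is conceptual only — Keeptrusts' breaker fields are not documented here, so the class, threshold, and cooldown values are invented:

```python
import time

class CircuitBreaker:
    """Open after `threshold` consecutive failures; probe again after `cooldown` seconds."""

    def __init__(self, threshold=5, cooldown=30.0, clock=None):
        self.threshold = threshold
        self.cooldown = cooldown
        self.clock = clock or time.monotonic
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if self.clock() - self.opened_at >= self.cooldown:
            self.opened_at = None  # half-open: let one request probe the target
            self.failures = 0
            return True
        return False

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0
            return
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = self.clock()

# With an injected fake clock we can watch the breaker trip and recover
t = [0.0]
cb = CircuitBreaker(threshold=2, cooldown=10.0, clock=lambda: t[0])
cb.record(False); cb.record(False)
print(cb.allow())  # False — breaker is open
t[0] = 11.0
print(cb.allow())  # True — cooldown elapsed, probing again
```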
Retry Policy
Retry transient failures automatically:
pack:
name: anthropic-providers-8
version: 1.0.0
enabled: true
providers:
targets:
- id: anthropic-sonnet
provider: anthropic
model: claude-sonnet-4-20250514
secret_key_ref:
env: ANTHROPIC_API_KEY
policies:
chain:
- audit-logger
policy:
audit-logger:
immutable: true
retention_days: 365
log_all_access: true
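Retrying a transient failure with exponential backoff can be sketched like this. The function and backoff parameters are illustrative, not gateway defaults:

```python
import time

def retry(fn, attempts=3, base_delay=0.5, sleep=None):
    """Call `fn` until it succeeds or `attempts` is exhausted.

    The delay doubles on each failed attempt; `sleep` is injectable so the
    backoff schedule can be observed in tests.
    """
    sleep = sleep or time.sleep
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient upstream error")
    return "ok"

delays = []
print(retry(flaky, attempts=3, sleep=delays.append))  # ok
print(delays)  # [0.5, 1.0]
```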
Zero Data Retention
Enforce that no prompt or completion data is stored:
pack:
name: anthropic-providers-9
version: 1.0.0
enabled: true
providers:
  targets:
    - id: anthropic-zdr
      provider: anthropic
      model: claude-sonnet-4-20250514
      data_policy:
        zero_data_retention: true
      secret_key_ref:
        env: ANTHROPIC_API_KEY
policies:
chain:
- audit-logger
policy:
audit-logger:
immutable: true
retention_days: 365
log_all_access: true
Cross-Provider Fallback
Use Anthropic as primary with OpenAI as fallback — format translation is handled automatically:
pack:
name: anthropic-providers-10
version: 1.0.0
enabled: true
providers:
  strategy: fallback
  targets:
- id: anthropic-primary
provider: anthropic
model: claude-sonnet-4-20250514
secret_key_ref:
env: ANTHROPIC_API_KEY
- id: openai-fallback
provider: openai
model: gpt-4o
secret_key_ref:
env: OPENAI_API_KEY
policies:
chain:
- audit-logger
policy:
audit-logger:
immutable: true
retention_days: 365
log_all_access: true
Rate Limiting
Enforce per-provider request rate limits:
pack:
name: anthropic-providers-11
version: 1.0.0
enabled: true
providers:
targets:
- id: anthropic-sonnet
provider: anthropic
model: claude-sonnet-4-20250514
secret_key_ref:
env: ANTHROPIC_API_KEY
policies:
chain:
- audit-logger
policy:
audit-logger:
immutable: true
retention_days: 365
log_all_access: true
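Rate limiting of this kind is commonly implemented as a token bucket. The sketch below is conceptual; the class and its parameters are not Keeptrusts policy fields:

```python
import time

class TokenBucket:
    """Allow up to `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float, clock=None):
        self.rate = rate
        self.capacity = capacity
        self.clock = clock or time.monotonic
        self.tokens = capacity
        self.last = self.clock()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, then spend one token if possible
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

t = [0.0]
bucket = TokenBucket(rate=1.0, capacity=2.0, clock=lambda: t[0])
print([bucket.allow() for _ in range(3)])  # [True, True, False] — burst exhausted
t[0] = 1.0
print(bucket.allow())  # True — one token refilled after a second
```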
Best Practices
- Format translation is automatic — use OpenAI SDKs against Anthropic endpoints without code changes; only the config target changes.
- Use stream_timeout_seconds for streaming — Claude's long-form generations can exceed default timeouts.
- Set anthropic_version explicitly if you depend on a specific API behavior; otherwise the default 2023-06-01 is used.
- Enable health probes on production targets so routing strategies can react to Anthropic outages.
- Use data_policy to document and enforce your compliance requirements.
- Declare pricing for cost dashboards — Anthropic pricing differs significantly between model tiers.
- For agentic workloads, use the Claude Agent SDK native runner with strict disallowed_tools and permission_mode to enforce governance on autonomous agent actions.
- Use provider_type: "anthropic" explicitly when routing through API gateways or non-standard URLs.
For AI systems
- Canonical terms: Keeptrusts gateway, Anthropic, Claude, provider target, policy-config.yaml, provider: "anthropic", format: "anthropic", secret_key_ref, ANTHROPIC_API_KEY.
- Config field names: provider, model, base_url, secret_key_ref.env, api_key_header: "x-api-key", api_key_prefix: "", anthropic_version, format, provider_type, data_policy, zdr.
- Provider shorthand: anthropic:messages:<model> (e.g., anthropic:messages:claude-sonnet-4-20250514).
- Key behavior: Keeptrusts auto-translates between OpenAI and Anthropic wire formats bidirectionally.
- Best next pages: AWS Bedrock integration (Bedrock-hosted Claude), Provider routing, Policy configuration.
For engineers
- Prerequisites: Anthropic API key (ANTHROPIC_API_KEY env var), kt CLI installed.
- Start command: kt gateway run --listen 0.0.0.0:8080 --policy-config policy-config.yaml.
- Validate with cURL: curl http://localhost:8080/v1/chat/completions -H 'Authorization: Bearer sk-ant-...' -H 'Content-Type: application/json' -d '{"model":"claude-sonnet-4-20250514","messages":[{"role":"user","content":"hello"}]}'.
- Format translation is automatic — clients send OpenAI format, Keeptrusts converts to Anthropic's messages API on the wire.
- Auth uses the x-api-key header with no Bearer prefix (auto-detected when provider: "anthropic").
- For Claude Agent SDK workloads, configure disallowed_tools and permission_mode in the agent runner config.
For leaders
- Setting training_opt_out: true and optional zero-data-retention (zdr: true) on Anthropic targets addresses data-handling compliance requirements.
- Format translation means existing OpenAI SDK codebases can adopt Claude without application code changes — reducing migration cost.
- Claude model pricing differs significantly across tiers (Haiku vs Sonnet vs Opus); set pricing fields for accurate cost attribution.
- Cross-provider fallback (Anthropic → OpenAI) provides resilience without vendor lock-in.
Next steps
- AWS Bedrock integration — access Claude through AWS with SigV4 auth and data residency controls
- OpenAI integration — configure cross-provider fallback from Anthropic to OpenAI
- Provider routing strategies — fallback, latency-based, and weighted routing
- Policy configuration — prompt-injection, PII, and safety policy reference
- Quickstart — install kt and run your first gateway