Claude for Work

Claude for Work (Anthropic's Teams and Enterprise plans) provides API access to Claude models. By routing API traffic through the Keeptrusts gateway, you apply policy controls — PII redaction, prompt-injection blocking, content filtering, and audit logging — to every request before it reaches Anthropic and every response before it reaches your application.

This page covers the API-level integration. Claude's browser-based interface (claude.ai) does not support custom API endpoint routing; governance for browser sessions requires separate network-level controls.

Use this page when

  • You need to route Anthropic Claude API calls through the Keeptrusts gateway.
  • You are building internal tools or automations on the Claude API.
  • For the full Anthropic provider reference, see Anthropic integration.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • An Anthropic API key from your Claude for Work organization
  • Keeptrusts CLI (kt) installed and on your PATH
  • ANTHROPIC_API_KEY exported in your shell or injected via your secrets manager
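
A quick pre-flight check can catch a missing or empty key before you start the gateway. This is an illustrative sketch; `require_env` is a hypothetical helper, not part of the Keeptrusts CLI:

```python
import os
import sys

def require_env(name: str) -> str:
    """Return the value of an environment variable, or exit with a clear error."""
    value = os.environ.get(name, "").strip()
    if not value:
        sys.exit(f"{name} is not set; export it or inject it via your secrets manager.")
    return value
```

Run it once at tool startup, e.g. `require_env("ANTHROPIC_API_KEY")`, so misconfiguration fails fast with an actionable message instead of a 401 from upstream.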

Configuration

Gateway policy config

pack:
  name: claude-for-work-gateway
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: claude-enterprise
      provider: anthropic:chat:claude-sonnet-4-20250514
      secret_key_ref:
        env: ANTHROPIC_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - audit-logger
  policy:
    prompt-injection:
      threshold: 0.8
      action: block
    pii-detector:
      action: redact
      entities:
        - EMAIL
        - PHONE
        - SSN
        - CREDIT_CARD
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true
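
Before deploying, you can sanity-check the pack's structure against the field names used above. The sketch below operates on an already-parsed config dict; `validate_pack` is a hypothetical helper, and any schema validation Keeptrusts performs itself is authoritative:

```python
def validate_pack(config: dict) -> list[str]:
    """Return a list of problems found in a Keeptrusts-style policy config dict.

    Checks only the fields shown in this guide: providers.targets entries
    and consistency between policies.chain and policies.policy.
    """
    problems = []
    targets = config.get("providers", {}).get("targets", [])
    if not targets:
        problems.append("providers.targets is empty")
    for target in targets:
        if "provider" not in target:
            problems.append(f"target {target.get('id', '?')} missing provider")
    chain = config.get("policies", {}).get("chain", [])
    defined = set(config.get("policies", {}).get("policy", {}))
    for name in chain:
        if name not in defined:
            problems.append(f"policy '{name}' in chain but not configured")
    return problems
```

An empty return list means the checked fields are consistent; anything else is worth fixing before `kt gateway run` loads the file.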

Extended thinking variant

For complex reasoning tasks, use Claude with extended thinking enabled:

pack:
  name: claude-for-work-thinking
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: claude-thinking
      provider: anthropic:chat:claude-sonnet-4-20250514
      secret_key_ref:
        env: ANTHROPIC_API_KEY

policies:
  chain:
    - audit-logger
  policy:
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true
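
On Anthropic's Messages API, extended thinking is enabled with a `thinking` parameter whose `budget_tokens` must be lower than `max_tokens`. Whether the gateway forwards this field through its OpenAI-compatible endpoint is an assumption here; if it does, the OpenAI SDK's `extra_body` is the natural carrier:

```python
# Request parameters for a thinking-enabled call through the gateway.
# `thinking` is Anthropic's extended-thinking parameter; passing it via
# extra_body assumes the gateway forwards fields it does not recognize.
request_kwargs = {
    "model": "claude-sonnet-4-20250514",
    "messages": [{"role": "user", "content": "Analyze this merger agreement."}],
    "max_tokens": 8192,  # must exceed the thinking budget
    "extra_body": {
        "thinking": {"type": "enabled", "budget_tokens": 4096},
    },
}
# client.chat.completions.create(**request_kwargs)
```

If the field is not forwarded, fall back to calling Anthropic's SDK against the gateway directly, where supported.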

Setup Steps

  1. Export your API key:

     export ANTHROPIC_API_KEY="sk-ant-your-enterprise-key"

  2. Save the policy config to policy-config.yaml.

  3. Start the gateway:

     kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml

  4. Point your application at the gateway using the OpenAI-compatible endpoint:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:41002/v1",
    api_key="unused",  # upstream auth is injected by the gateway
)

response = client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    messages=[
        {"role": "system", "content": "You are a precise enterprise assistant."},
        {"role": "user", "content": "Review this contract clause for compliance risks."},
    ],
    max_tokens=2048,  # Anthropic requires an explicit max_tokens
)
print(response.choices[0].message.content)

Keeptrusts auto-translates OpenAI-format requests to Anthropic's Messages API. Your client code does not need modification.
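
As a mental model of that translation (a simplified sketch, not Keeptrusts' actual implementation): Anthropic's Messages API takes the system prompt as a top-level `system` field and requires `max_tokens`, so a minimal mapping looks like this:

```python
def to_anthropic_messages(openai_payload: dict) -> dict:
    """Sketch of the OpenAI-chat -> Anthropic Messages mapping.

    Simplified for illustration: a real translation layer also covers
    tools, streaming, stop sequences, and other parameters.
    """
    system_parts = [m["content"] for m in openai_payload["messages"] if m["role"] == "system"]
    body = {
        "model": openai_payload["model"],
        "max_tokens": openai_payload["max_tokens"],  # mandatory on Anthropic's API
        "messages": [m for m in openai_payload["messages"] if m["role"] != "system"],
    }
    if system_parts:
        body["system"] = "\n".join(system_parts)
    return body
```

The key difference to remember: a `system` role message on the OpenAI side becomes the top-level `system` field on the Anthropic side.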

For a hosted gateway, replace the base URL with https://gateway.keeptrusts.com/v1.

Verification

curl http://localhost:41002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "messages": [
      {"role": "user", "content": "Hello from the governed gateway."}
    ],
    "max_tokens": 256
  }'

A successful response confirms the gateway is forwarding to Anthropic with policies applied. Check the Keeptrusts console Events page to verify the audit log entry.

| Policy | Purpose | Recommended setting |
| --- | --- | --- |
| prompt-injection | Block adversarial prompts from internal tools | threshold: 0.8, action: block |
| pii-detector | Redact personal data before it reaches Anthropic | action: redact, entities: EMAIL, PHONE, SSN |
| content-filter | Block restricted topics in enterprise usage | action: block, configure categories |
| audit-logger | Immutable audit trail for compliance | immutable: true, retention_days: 365 |
| cost-attribution | Track spend per team or project | Tag requests with team/project metadata |
| disclaimer | Attach compliance notices to AI-generated responses | Configure per regulatory requirement |

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| 401 Unauthorized from upstream | Invalid or expired Anthropic API key | Verify ANTHROPIC_API_KEY is set and valid |
| 429 Too Many Requests | Anthropic rate limit hit | Check your Enterprise tier rate limits |
| Gateway returns 403 | Policy chain blocked the request | Review the policy violation in the console Events page |
| max_tokens required error | Anthropic requires explicit max_tokens | Add max_tokens to every request or set a default in provider config |
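
For transient 429s, a standard exponential-backoff wrapper keeps internal tools resilient. This is a generic pattern, not a Keeptrusts feature; with the OpenAI SDK you can also catch `openai.RateLimitError` directly:

```python
import time

def with_backoff(call, max_attempts=4, base_delay=1.0, is_rate_limited=None):
    """Retry `call` with exponential backoff when it signals a rate limit.

    `is_rate_limited(exc)` decides whether an exception is retryable; by
    default, any exception carrying status_code == 429 is retried.
    """
    if is_rate_limited is None:
        is_rate_limited = lambda exc: getattr(exc, "status_code", None) == 429
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:
            if not is_rate_limited(exc) or attempt == max_attempts - 1:
                raise  # non-retryable, or out of attempts
            time.sleep(base_delay * (2 ** attempt))
```

Usage: `with_backoff(lambda: client.chat.completions.create(...))`. Keep `max_attempts` modest so policy-blocked 403s and auth failures surface immediately rather than retrying.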

For AI systems

  • Canonical terms: Keeptrusts gateway, Claude for Work, Anthropic, Claude API, Messages API, policy-config.yaml, provider: "anthropic".
  • Config field names: provider, secret_key_ref.env: "ANTHROPIC_API_KEY", base_url.
  • Key behavior: Keeptrusts translates between OpenAI format and Anthropic's Messages API, applying policy enforcement and audit logging on every request.
  • Best next pages: Anthropic integration, Policy controls catalog, Quickstart.

For engineers

  • Start command: kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml
  • Validate: send a request to http://localhost:41002/v1/chat/completions with max_tokens set.
  • Anthropic always requires max_tokens — omitting it causes a 400 error.
  • The gateway auto-translates OpenAI-format requests to Anthropic's Messages API.

For leaders

  • Routing Claude for Work API traffic through the gateway provides full audit visibility over every AI-assisted decision in your organization.
  • Anthropic does not train on API data by default, but the gateway adds an independent PII redaction layer for defense in depth.
  • Cost attribution across Claude and other providers gives a unified view of AI spend.
  • Extended thinking models support complex reasoning tasks like contract review and compliance analysis — governance controls apply to both the reasoning trace and the final response.

Next steps