Migrate from Direct API Calls to Governed AI in 30 Minutes
If your applications call OpenAI, Anthropic, or other LLM providers directly, you have no visibility, no controls, and no audit trail. This guide walks you through migrating to Keeptrusts in 30 minutes — with zero application downtime and minimal code changes.
Use this page when
- Your applications call OpenAI, Anthropic, or other providers directly and you want to add governance without rewriting code.
- You need a step-by-step migration path that achieves zero application downtime.
- You want to start with logging-only and incrementally add policy enforcement over time.
Primary audience
- Primary: Technical Leaders
- Secondary: Technical Engineers, AI Agents
What you'll achieve
- Full governance over every AI request — logging, policy enforcement, cost tracking
- Zero application downtime — the migration is a URL swap, not a rewrite
- Immediate visibility — see every request in the console within seconds of migration
- Incremental policy rollout — start with logging, then add controls over time
- Multi-provider resilience — add failover providers alongside your existing one
Before you start
You need:
- A running Keeptrusts stack (self-hosted or managed) — see Quickstart
- Your current provider API key (e.g., `OPENAI_API_KEY`)
- 30 minutes
Step 1: Start the gateway (5 minutes)
Create a minimal `policy-config.yaml` that logs everything but blocks nothing:

```yaml
pack:
  name: migration-starter
  version: "1.0"
  description: Logging-only config for initial migration
policies:
  chain:
    - audit-logger
  policy:
    audit-logger:
      retention_days: 90
providers:
  targets:
    - id: openai-primary
      provider: openai
      model: gpt-4o
      base_url: https://api.openai.com
      secret_key_ref:
        env: OPENAI_API_KEY
```

Then export your provider key and start the gateway against that config:

```bash
export OPENAI_API_KEY="sk-your-openai-key"

kt gateway run \
  --listen 0.0.0.0:8080 \
  --policy-config policy-config.yaml
```
The gateway now reads the provider target from `providers.targets[]`, so the migration remains a base-URL swap in the application without relying on runtime upstream overrides.
Verify the gateway is running:
```bash
curl http://localhost:8080/keeptrusts/health
# Expected: {"status":"ok"}
```
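If you script the startup, a short readiness loop avoids racing the gateway. A minimal Python sketch built on the health endpoint above; the poll interval and timeout are arbitrary choices:

```python
import json
import time
import urllib.request


def is_healthy(body: bytes) -> bool:
    """Return True when the health endpoint reports {"status": "ok"}."""
    try:
        return json.loads(body).get("status") == "ok"
    except (ValueError, AttributeError):
        return False


def wait_for_gateway(url: str = "http://localhost:8080/keeptrusts/health",
                     timeout: float = 30.0) -> bool:
    """Poll the health endpoint until it reports ok or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if is_healthy(resp.read()):
                    return True
        except OSError:
            pass  # gateway not accepting connections yet
        time.sleep(1)
    return False
```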
Step 2: Update your application (5 minutes)
The migration requires changing only the base URL in your application. The request format is identical — Keeptrusts speaks the same API as your upstream provider.
Python (OpenAI SDK)
Before:
```python
from openai import OpenAI

client = OpenAI(api_key="sk-...")
```
After:
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",
    base_url="http://localhost:8080/v1",
)
```
Python (Anthropic SDK)
Before:
```python
import anthropic

client = anthropic.Anthropic(api_key="sk-ant-...")
```
After:
```python
import anthropic

client = anthropic.Anthropic(
    api_key="sk-ant-...",
    base_url="http://localhost:8080",
)
```
Node.js (OpenAI SDK)
Before:
```javascript
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'sk-...' });
```
After:
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-...',
  baseURL: 'http://localhost:8080/v1',
});
```
cURL
Before:
```bash
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer sk-..." \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```
After:
```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer sk-..." \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```
That's it. One URL change. No other code modifications needed.
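If you migrate services one at a time, driving the base URL from an environment variable makes the swap reversible without a code change. A minimal Python sketch; the variable name `KT_GATEWAY_URL` is our own convention here, not something Keeptrusts requires:

```python
import os


def resolve_base_url(default="https://api.openai.com/v1"):
    """Use the gateway when KT_GATEWAY_URL is set; otherwise call the provider directly."""
    return os.environ.get("KT_GATEWAY_URL", default)


# client = OpenAI(api_key=os.environ["OPENAI_API_KEY"], base_url=resolve_base_url())
```

Unsetting the variable rolls the service back to direct provider access instantly.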
Step 3: Verify in the console (5 minutes)
- Open the Keeptrusts console in your browser
- Navigate to Events
- Send a request through your application
- Confirm the event appears in the Events list within seconds
You should see:
- The request model and provider
- Token counts (input and output)
- Policy outcomes (should show `audit-logger: pass`)
- Latency breakdown
Step 4: Add observability policies (5 minutes)
Now that traffic is flowing, add policies that observe without blocking:
```yaml
pack:
  name: migration-observe
  version: "2.0"
policies:
  chain:
    - pii-detector
    - prompt-injection
    - audit-logger
  policy:
    pii-detector:
      action: log
    prompt-injection:
      response:
        action: log
    audit-logger:
      retention_days: 90
```
Reload the gateway configuration:
```bash
# If using file-based config, restart the gateway.
# Or use the config reload endpoint:
curl -X POST http://localhost:8080/keeptrusts/config/reload
```
Now you'll see PII detection and prompt injection events in the console — without blocking any traffic.
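Before moving to enforcement, it helps to summarize what the observability policies actually flagged. A rough Python sketch over exported event records; the `policy` and `outcome` field names are assumptions about the export shape, not a documented schema:

```python
from collections import Counter


def tally_outcomes(events):
    """Count (policy, outcome) pairs, e.g. how often pii-detector flagged a request."""
    return Counter((e["policy"], e["outcome"]) for e in events)


# Hypothetical exported events standing in for real console data.
events = [
    {"policy": "pii-detector", "outcome": "detected"},
    {"policy": "pii-detector", "outcome": "pass"},
    {"policy": "prompt-injection", "outcome": "pass"},
    {"policy": "pii-detector", "outcome": "detected"},
]
counts = tally_outcomes(events)
```

A high `detected` count tells you which policies will bite once you switch them from `log` to `redact` or `block`.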
Step 5: Enable enforcement (10 minutes)
After reviewing the observability data (recommended: at least 24 hours), upgrade to enforcement:
```yaml
pack:
  name: migration-enforce
  version: "3.0"
policies:
  chain:
    - pii-detector
    - prompt-injection
    - audit-logger
  policy:
    pii-detector:
      action: redact
      redaction:
        marker_format: label
        include_metadata: true
    prompt-injection:
      embedding_threshold: 0.8
      response:
        action: block
    audit-logger:
      retention_days: 365
```
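To picture what `action: redact` with `marker_format: label` does to request content, here is a rough local approximation that redacts email addresses with a regex; the real detector covers far more PII types, and the marker text below is illustrative only:

```python
import re

# Simplified email pattern for illustration; not a full RFC 5322 matcher.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def redact_emails(text):
    """Replace email addresses with a labeled marker, similar in spirit to marker_format: label."""
    return EMAIL.sub("[EMAIL_ADDRESS]", text)


redacted = redact_emails("Contact alice@example.com about the invoice.")
# -> "Contact [EMAIL_ADDRESS] about the invoice."
```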
Step 6: Add a failover provider (bonus)
While you're migrating, add a second provider for resilience:
```yaml
pack:
  name: migrate-from-direct-api-providers-4
  version: 1.0.0
  enabled: true
providers:
  targets:
    - id: primary-openai
      provider: openai
      model: gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
    - id: fallback-azure
      provider: azure-openai
      model: gpt-4o
      base_url: https://my-resource.openai.azure.com
      secret_key_ref:
        env: AZURE_OPENAI_KEY
policies:
  chain:
    - audit-logger
  policy:
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true
```
Now your application has automatic failover — something that would have required significant code changes without Keeptrusts.
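Conceptually, the failover the gateway performs is an ordered walk over `providers.targets[]`. A simplified Python sketch of that pattern; the gateway's real retry and health logic is internal, and `call` here stands in for any provider invocation:

```python
def with_failover(targets, call):
    """Try each target in order; return the first successful result.

    `targets` mirrors the providers.targets[] order; `call(target)` raises on failure.
    """
    last_error = None
    for target in targets:
        try:
            return call(target)
        except Exception as exc:  # in practice: timeouts, 5xx responses
            last_error = exc
    raise RuntimeError("all provider targets failed") from last_error
```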
Migration checklist
| Step | Time | Status |
|---|---|---|
| Start gateway with logging-only config | 5 min | ☐ |
| Update application base URL | 5 min | ☐ |
| Verify events appear in console | 5 min | ☐ |
| Add observability policies (log mode) | 5 min | ☐ |
| Review observability data (recommended: 24h) | — | ☐ |
| Enable enforcement (redact/block) | 10 min | ☐ |
| Add failover provider (optional) | 5 min | ☐ |
Common migration questions
Does this add latency? The gateway adds 1–5ms of processing overhead. Provider routing and policy evaluation are optimized for minimal latency impact.
Do I need to change my API key? No. Your existing provider API key works through the gateway. Optionally, you can use Keeptrusts gateway keys for team attribution.
Does streaming still work? Yes. The gateway supports SSE streaming with full policy evaluation on streamed responses.
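For reference, an OpenAI-style SSE stream delivers `data:` lines whose content deltas the client concatenates, ending with a `[DONE]` sentinel; the gateway forwards these chunks unchanged. A minimal Python sketch of that assembly step, using hard-coded lines in place of a live stream:

```python
import json


def parse_sse_stream(lines):
    """Extract content deltas from OpenAI-style SSE lines and join them."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip comments and blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        parts.append(delta.get("content") or "")  # role/finish chunks carry no content
    return "".join(parts)
```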
What if the gateway goes down? Configure your application to fall back to direct provider access. The gateway is stateless — restarting it restores full functionality immediately.
Can I migrate one application at a time? Yes. Each application independently points to the gateway or the provider directly. Migrate incrementally.
Quick wins
- Migrate one application today — pick the simplest one and do the URL swap
- Check the console — see your first event within seconds
- Enable PII detection in log mode — discover what sensitive data your apps are sending
- Add pricing blocks — start tracking costs per request immediately
- Share the console with your team — demonstrate governance visibility
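Cost tracking ultimately reduces to tokens times rate. A tiny Python sketch of the per-request arithmetic; the prices below are placeholders, not Keeptrusts or provider figures, so substitute your provider's current price list:

```python
# Hypothetical USD prices per million tokens; replace with real rates.
PRICES = {"gpt-4o": {"input": 2.50, "output": 10.00}}


def request_cost(model, input_tokens, output_tokens):
    """Estimate the cost of one request from its token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```

The console reports input and output token counts per event, so this arithmetic can be checked against any single logged request.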
For AI systems
- Canonical terms: gateway, `policy-config.yaml`, `audit-logger`, `base_url` swap, provider target, `secret_key_ref`.
- CLI commands: `kt gateway run --listen 0.0.0.0:8080 --policy-config policy-config.yaml`.
- Health check: `GET /keeptrusts/health` returns `{"status":"ok"}`.
- SDK migration: change `base_url` to the gateway URL and `api_key` to a gateway key — no other code changes.
- Best next pages: Quickstart, Developer Experience, Reduce AI Spend.
For engineers
- Prerequisites: Keeptrusts stack running, provider API key (e.g., `OPENAI_API_KEY`) available.
- Step 1: Create a minimal `policy-config.yaml` with `audit-logger` only (logging, no blocking).
- Step 2: Run `kt gateway run --listen 0.0.0.0:8080 --policy-config policy-config.yaml`.
- Step 3: Update the application `base_url` to the gateway URL — verify with `curl http://localhost:8080/keeptrusts/health`.
- Validate: confirm events appear in the console Events page within seconds of your first migrated request.
- Add policies incrementally (PII detection, injection defense) once logging confirms traffic flows correctly.
For leaders
- Migration is a 30-minute infrastructure change with zero application downtime — no sprint planning needed.
- Immediate visibility into every AI request gives you the data to justify further governance investment.
- Incremental policy rollout (logging → alerting → blocking) avoids disrupting existing workflows.
- Once migrated, you unlock cost optimization, compliance evidence, and multi-provider resilience.
Next steps
- Quickstart — initial platform setup
- Reduce AI Spend — optimize costs now that you have visibility
- Prevent Data Leaks — enable PII protection
- Multi-Provider Resilience — add failover providers
- Accelerate AI Adoption — onboard more teams