CTO Guide: Accelerating Developer Velocity with Governed AI
Governance and developer velocity are not opposites. Done right, governance removes friction — developers get instant AI access through self-service gateway keys, use familiar SDKs without code changes, and validate policies in CI before they hit production.
Use this page when
- You want to show that governance accelerates (not slows) developer productivity
- You are setting up gateway key self-service for engineering teams
- You need developers to use familiar OpenAI-compatible SDKs without code changes
- You are integrating policy validation into CI/CD pipelines for pre-production governance checks
This guide covers the patterns that make governed AI faster than ungoverned AI.
Primary audience
- Primary: Technical Leaders
- Secondary: Technical Engineers, AI Agents
Gateway Key Self-Service
The traditional model — developers open a ticket, wait for approval, receive a provider API key — takes days and creates shadow AI incentives. Gateway keys flip this model.
How It Works
- Platform team defines policy templates with pre-approved provider access and budget caps
- Developers request a gateway key through the console or CLI
- The key is scoped to their team, budget, and policy configuration
- Developers use the key immediately — no approval delay for pre-approved templates
# Developer self-service: request a gateway key (if authorized)
kt tokens create \
--type gateway \
--name "my-feature-branch-gk" \
--team-id search \
--expires-in 7d
Console checkpoint: The Settings → Access Keys page shows developers their active keys, remaining budget, and associated policy template. No admin intervention needed for standard access.
Key Lifecycle
| Stage | Action | Who |
|---|---|---|
| Provision | Create key with team scope | Developer (self-service) |
| Use | Drop into any OpenAI-compatible SDK | Developer |
| Monitor | View usage in Console Usage | Developer + Platform |
| Rotate | Auto-expire and re-provision | Automated (configurable) |
| Revoke | Immediate deactivation if compromised | Platform admin |
OpenAI-Compatible SDK Drop-In
The Keeptrusts gateway speaks the OpenAI API protocol. Any SDK, library, or tool that works with OpenAI works with Keeptrusts: change two lines and you're governed.
Python
import openai
client = openai.OpenAI(
api_key="kt_gk_...",
base_url="https://gateway.company.com/v1"
)
response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Explain this error log"}]
)
TypeScript / Node.js
import OpenAI from "openai";
const client = new OpenAI({
apiKey: "kt_gk_...",
baseURL: "https://gateway.company.com/v1",
});
const completion = await client.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "Explain this error log" }],
});
cURL (any language)
curl https://gateway.company.com/v1/chat/completions \
-H "Authorization: Bearer kt_gk_..." \
-H "Content-Type: application/json" \
-d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
Key point: No Keeptrusts SDK is required. Your developers use the tools they already know.
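A low-friction rollout pattern is to read the key and base URL from environment variables, so pointing existing code at the gateway becomes a deployment change rather than a code change. A minimal sketch, assuming illustrative variable names (KT_GATEWAY_KEY and KT_GATEWAY_URL are not a Keeptrusts convention):

```python
import os

def client_kwargs() -> dict:
    """Build OpenAI client settings from the environment (sketch).

    If KT_GATEWAY_URL is unset, the client falls back to the SDK's
    default endpoint, so the same code runs governed or ungoverned.
    """
    kwargs = {"api_key": os.environ["KT_GATEWAY_KEY"]}
    base_url = os.environ.get("KT_GATEWAY_URL")
    if base_url:
        kwargs["base_url"] = base_url
    return kwargs

# Then: client = openai.OpenAI(**client_kwargs())
```

Switching a service onto the gateway is then a matter of setting two environment variables in its deployment config.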
Streaming Support
Streaming responses work identically through the gateway. Policies are evaluated in both the input and output phases, with redaction applied to stream chunks in real time.
stream = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Write a summary"}],
stream=True
)
for chunk in stream:
print(chunk.choices[0].delta.content or "", end="")
No additional configuration is needed. Streaming is transparent to the developer.
Chat Workbench for Prototyping
The Keeptrusts chat workbench is a governed AI playground that developers use to prototype prompts, test policy behavior, and iterate on AI features before writing code.
- Policy preview — See which policies fire on each message in real time
- Model switching — Compare responses across providers without changing keys
- Knowledge base injection — Test RAG workflows with uploaded context documents
- Team scoping — Each team's chat environment inherits their policy configuration
Console checkpoint: Access the chat workbench from the console navigation. Developers see their team's allowed models and active policies reflected in the workbench interface.
Template Library for Instant Guardrails
Templates give developers production-ready policy configurations without requiring policy expertise.
# List available templates
kt templates list
# Apply a template to your development gateway
kt config apply --template standard-dev --gateway local-gw
Standard Templates
| Template | Included Policies | Use Case |
|---|---|---|
| standard-dev | Logging, PII filter (warn), cost cap ($50/day) | Daily development |
| standard-staging | Logging, PII filter (block), cost cap ($200/day) | Pre-production testing |
| standard-prod | Full policy chain, audit trail, escalation | Production workloads |
| rapid-prototype | Logging only, no content filters | Hackathons, PoCs |
Console checkpoint: The Templates page shows available templates with policy summaries. Developers can preview what each template enforces before applying it.
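To make the table concrete, a standard-dev template bundles roughly the following. The YAML below is a sketch only; the field names are assumptions, not the actual Keeptrusts template schema:

```yaml
# Illustrative sketch — field names are assumed, not the real schema
template: standard-dev
policies:
  - type: logging          # log every request and response
  - type: pii-filter
    action: warn           # flag PII in dev, but do not block
  - type: cost-cap
    limit_usd_per_day: 50  # daily budget cap from the table above
```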
Policy Validation in CI with kt policy lint
Shift policy validation left. Run kt policy lint in your CI pipeline to catch policy configuration errors before deployment.
# .github/workflows/ai-policy-check.yml
name: AI Policy Validation
on: [pull_request]
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install kt CLI
run: curl -sSL https://install.keeptrusts.com | sh
- name: Validate policy config
run: kt policy lint --file policy-config.yaml
What Validation Catches
| Check | Example Failure | Impact |
|---|---|---|
| Schema validation | Missing required action field | Prevents gateway startup failure |
| Provider reference | secret_key_ref points to undefined variable | Prevents runtime credential errors |
| Budget consistency | Soft cap > hard cap | Prevents silent budget bypass |
| Policy conflicts | Two policies with contradictory actions | Prevents unpredictable enforcement |
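The budget-consistency check is a good illustration of why linting matters: a soft cap above the hard cap means the early warning can never fire before the hard stop. A standalone sketch of that single rule (not the actual kt implementation):

```python
def lint_budget(policy: dict) -> list[str]:
    """Return lint errors for a policy's budget caps (one-rule sketch)."""
    errors = []
    soft = policy.get("soft_cap_usd")
    hard = policy.get("hard_cap_usd")
    if soft is not None and hard is not None and soft > hard:
        errors.append(
            f"soft cap ${soft} exceeds hard cap ${hard}: "
            "budget warnings would never fire before the hard stop"
        )
    return errors
```

For example, lint_budget({"soft_cap_usd": 200, "hard_cap_usd": 50}) reports an error, while a well-ordered pair passes cleanly.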
Measuring Developer Velocity
| Metric | Before Governance | With Keeptrusts | Improvement |
|---|---|---|---|
| Time to first AI API call | 3–5 days (ticket) | < 15 minutes (self-service) | 95%+ |
| SDK integration effort | Provider-specific code | 2 lines changed | Minimal |
| Policy validation | Manual review | Automated CI check | 100% coverage |
| Incident response time | Hours (find the right key) | Minutes (event trace) | 80%+ |
| Onboarding new team | 1–2 weeks | Same day | 90%+ |
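The headline figures above are easy to sanity-check. Even the most conservative reading of time to first AI API call (3 days down to 15 minutes) clears the 95% bar:

```python
# Conservative case from the table: 3 days (ticket) vs 15 minutes (self-service)
before_min = 3 * 24 * 60   # 3 days expressed in minutes
after_min = 15

improvement = 1 - after_min / before_min
print(f"{improvement:.1%}")  # prints "99.7%", well above the 95%+ claimed
```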
ROI for Engineering Leadership
| Investment | Return | Timeline |
|---|---|---|
| Gateway key self-service | Eliminate AI access tickets | Week 1 |
| SDK drop-in compatibility | Zero migration cost | Immediate |
| CI policy validation | Prevent production policy failures | Week 2 |
| Template library | Standardized governance across teams | Month 1 |
| Chat workbench adoption | Faster prompt iteration cycles | Month 1 |
Next steps
- Deploy a gateway with the standard-dev template for your pilot team
- Replace one team's direct OpenAI API key with a gateway key
- Add kt policy lint to your CI pipeline
- Enable the chat workbench for prompt prototyping
- Measure time-to-first-call before and after
See also: CTO Guide: AI Platform Engineering · CTO Guide: The AI Chat Workbench
For AI systems
- Canonical terms: gateway key self-service, kt tokens create --type gateway, OpenAI-compatible SDK drop-in, base_url, chat workbench prototyping, kt policy lint, template library, Settings → Access Keys page
- Key pattern: change only api_key and base_url in any OpenAI-compatible SDK to route through the governed gateway
- Best next pages: CTO: Platform Engineering, CTO: Chat Workbench, CI/CD Pipeline Integration
For engineers
- Self-service key: kt tokens create --type gateway --name "my-feature-branch-gk" --team-id search --expires-in 7d
- Python drop-in: openai.OpenAI(api_key="kt_gk_...", base_url="https://gateway.company.com/v1")
- TypeScript drop-in: new OpenAI({ apiKey: "kt_gk_...", baseURL: "https://gateway.company.com/v1" })
- CI validation: kt policy lint --file policy-config.yaml in PR pipelines — exit code 0 = safe to deploy
- Console checkpoint: Settings → Access Keys shows active keys, remaining budget, and associated policy template
For leaders
- Self-service gateway keys eliminate the ticket-and-wait cycle that drives developers to shadow AI (days reduced to minutes)
- OpenAI SDK compatibility means zero code migration cost — developers change two lines and gain governance
- CI policy validation catches governance issues before production, not after — shifting compliance left
- Template library enables standardized AI access patterns across teams without per-team negotiation