CTO Guide: Accelerating Developer Velocity with Governed AI

Governance and developer velocity are not opposites. Done right, governance removes friction — developers get instant AI access through self-service gateway keys, use familiar SDKs without code changes, and validate policies in CI before they hit production.

Use this page when

  • You want to show that governance accelerates (not slows) developer productivity
  • You are setting up gateway key self-service for engineering teams
  • You need developers to use familiar OpenAI-compatible SDKs without code changes
  • You are integrating policy validation into CI/CD pipelines for pre-production governance checks

This guide covers the patterns that make governed AI faster than ungoverned AI.

Primary audience

  • Primary: Technical Leaders
  • Secondary: Technical Engineers, AI Agents

Gateway Key Self-Service

The traditional model — developers open a ticket, wait for approval, receive a provider API key — takes days and creates shadow AI incentives. Gateway keys flip this model.

How It Works

  1. Platform team defines policy templates with pre-approved provider access and budget caps
  2. Developers request a gateway key through the console or CLI
  3. The key is scoped to their team, budget, and policy configuration
  4. Developers use the key immediately — no approval delay for pre-approved templates
# Developer self-service: request a gateway key (if authorized)
kt tokens create \
  --type gateway \
  --name "my-feature-branch-gk" \
  --team-id search \
  --expires-in 7d

Console checkpoint: The Settings → Access Keys page shows developers their active keys, remaining budget, and associated policy template. No admin intervention needed for standard access.

Key Lifecycle

Stage     | Action                                | Who
Provision | Create key with team scope            | Developer (self-service)
Use       | Drop into any OpenAI-compatible SDK   | Developer
Monitor   | View usage in Console Usage           | Developer + Platform
Rotate    | Auto-expire and re-provision          | Automated (configurable)
Revoke    | Immediate deactivation if compromised | Platform admin
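The Rotate stage above can be sketched as a simple pass over active keys: retire anything past its expiry and issue a same-scope replacement. This is a hedged illustration; `GatewayKey` and `rotate_expired` are hypothetical names, not part of the Keeptrusts API:

```python
# Illustrative sketch of the auto-rotate lifecycle stage: deactivate
# expired keys and provision same-scope replacements. Not the actual
# Keeptrusts implementation.
from dataclasses import dataclass, replace
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class GatewayKey:
    name: str
    team_id: str
    expires_at: datetime
    active: bool = True

def rotate_expired(keys, now, ttl=timedelta(days=7)):
    """Deactivate expired keys and issue replacements with the same scope."""
    rotated = []
    for key in keys:
        if key.active and key.expires_at <= now:
            rotated.append(replace(key, active=False))  # retire the old key
            rotated.append(GatewayKey(key.name, key.team_id,
                                      expires_at=now + ttl))  # re-provision
        else:
            rotated.append(key)
    return rotated
```

Keeping the retired key in the list (inactive rather than deleted) preserves an audit trail for the Monitor and Revoke stages.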

OpenAI-Compatible SDK Drop-In

The Keeptrusts gateway speaks the OpenAI API protocol. Any SDK, library, or tool that works with OpenAI works with Keeptrusts — change two lines and you're governed.

Python

import openai

client = openai.OpenAI(
    api_key="kt_gk_...",
    base_url="https://gateway.company.com/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain this error log"}]
)

TypeScript / Node.js

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "kt_gk_...",
  baseURL: "https://gateway.company.com/v1",
});

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Explain this error log" }],
});

cURL (any language)

curl https://gateway.company.com/v1/chat/completions \
  -H "Authorization: Bearer kt_gk_..." \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'

Key point: No Keeptrusts SDK is required. Your developers use the tools they already know.

Streaming Support

Streaming responses work identically through the gateway. Policies are evaluated on both the input and output phases, with redaction applied to stream chunks in real time.

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a summary"}],
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")

No additional configuration is needed. Streaming is transparent to the developer.
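The real-time redaction of stream chunks described above can be sketched as a generator that masks sensitive patterns before forwarding each chunk. This is an assumption-laden illustration (the SSN pattern and `redact_stream` name are hypothetical), not the gateway's actual filter:

```python
# Sketch of pattern-based PII redaction applied to stream chunks as they
# pass through the gateway. Illustrative only; the real policy engine's
# filters and patterns are not shown in this guide.
import re

# Example pattern: mask anything shaped like a US SSN.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_stream(chunks):
    """Yield chunks with PII patterns masked in real time."""
    for chunk in chunks:
        yield SSN.sub("[REDACTED]", chunk)
```

A production gateway must also handle a pattern split across two chunks, which requires buffering a small tail of each chunk before forwarding; this sketch omits that for brevity.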

Chat Workbench for Prototyping

The Keeptrusts chat workbench is a governed AI playground that developers use to prototype prompts, test policy behavior, and iterate on AI features before writing code.

  • Policy preview — See which policies fire on each message in real time
  • Model switching — Compare responses across providers without changing keys
  • Knowledge base injection — Test RAG workflows with uploaded context documents
  • Team scoping — Each team's chat environment inherits their policy configuration

Console checkpoint: Access the chat workbench from the console navigation. Developers see their team's allowed models and active policies reflected in the workbench interface.

Template Library for Instant Guardrails

Templates give developers production-ready policy configurations without requiring policy expertise.

# List available templates
kt templates list

# Apply a template to your development gateway
kt config apply --template standard-dev --gateway local-gw

Standard Templates

Template         | Included Policies                                | Use Case
standard-dev     | Logging, PII filter (warn), cost cap ($50/day)   | Daily development
standard-staging | Logging, PII filter (block), cost cap ($200/day) | Pre-production testing
standard-prod    | Full policy chain, audit trail, escalation       | Production workloads
rapid-prototype  | Logging only, no content filters                 | Hackathons, PoCs

Console checkpoint: The Templates page shows available templates with policy summaries. Developers can preview what each template enforces before applying it.
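The daily cost caps in the standard templates come down to a per-request budget decision. Here is a hedged sketch of that logic; the `warn`/`block` decision names and the 80% soft threshold are assumptions for illustration:

```python
# Illustrative sketch of a daily cost-cap check, as enforced by templates
# like standard-dev ($50/day). Decision names and the soft-threshold ratio
# are assumptions, not the actual Keeptrusts policy engine.
def check_budget(spent_today: float, daily_cap: float,
                 soft_ratio: float = 0.8) -> str:
    """Return the gateway's budget decision for the next request."""
    if spent_today >= daily_cap:
        return "block"  # hard cap reached: reject the request
    if spent_today >= daily_cap * soft_ratio:
        return "warn"   # soft threshold crossed: allow, but flag usage
    return "allow"
```

Separating a warn threshold from the hard cap gives teams a heads-up before requests start getting rejected.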

Running kt policy lint in CI

Shift policy validation left. Run kt policy lint in your CI pipeline to catch policy configuration errors before deployment.

# .github/workflows/ai-policy-check.yml
name: AI Policy Validation
on: [pull_request]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install kt CLI
        run: curl -sSL https://install.keeptrusts.com | sh
      - name: Validate policy config
        run: kt policy lint --file policy-config.yaml

What Validation Catches

Check               | Example Failure                             | Impact
Schema validation   | Missing required action field               | Prevents gateway startup failure
Provider references | secret_key_ref points to undefined variable | Prevents runtime credential errors
Budget consistency  | Soft cap > hard cap                         | Prevents silent budget bypass
Policy conflicts    | Two policies with contradictory actions     | Prevents unpredictable enforcement
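The four checks in the table above can be sketched as a minimal lint pass over a policy config. The config schema used here (`policies`, `variables`, `budget` keys) is an assumption for illustration; the real `kt policy lint` schema may differ:

```python
# Minimal sketch of the lint checks from the table: schema validation,
# provider references, budget consistency, and policy conflicts.
# The config layout is assumed; not the actual kt policy schema.
def lint_policy(config: dict) -> list[str]:
    """Return a list of lint errors; an empty list means safe to deploy."""
    errors = []
    variables = config.get("variables", {})
    seen_actions = {}
    for i, policy in enumerate(config.get("policies", [])):
        # Schema validation: every policy needs an action
        if "action" not in policy:
            errors.append(f"policy[{i}]: missing required 'action' field")
        # Provider references: secret_key_ref must point to a defined variable
        ref = policy.get("secret_key_ref")
        if ref is not None and ref not in variables:
            errors.append(f"policy[{i}]: secret_key_ref '{ref}' is undefined")
        # Policy conflicts: same match target with contradictory actions
        target = policy.get("match")
        if target is not None:
            if target in seen_actions and seen_actions[target] != policy.get("action"):
                errors.append(f"policy[{i}]: contradicts an earlier policy on '{target}'")
            seen_actions[target] = policy.get("action")
    # Budget consistency: a soft cap above the hard cap would never fire
    budget = config.get("budget", {})
    if budget.get("soft_cap", 0) > budget.get("hard_cap", float("inf")):
        errors.append("budget: soft cap exceeds hard cap")
    return errors
```

In CI, a non-empty error list would map to a non-zero exit code, failing the pull request before a broken config reaches the gateway.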

Measuring Developer Velocity

Metric                    | Before Governance          | With Keeptrusts             | Improvement
Time to first AI API call | 3–5 days (ticket)          | < 15 minutes (self-service) | 95%+
SDK integration effort    | Provider-specific code     | 2 lines changed             | Minimal
Policy validation         | Manual review              | Automated CI check          | 100% coverage
Incident response time    | Hours (find the right key) | Minutes (event trace)       | 80%+
Onboarding new team       | 1–2 weeks                  | Same day                    | 90%+

ROI for Engineering Leadership

Investment                | Return                               | Timeline
Gateway key self-service  | Eliminate AI access tickets          | Week 1
SDK drop-in compatibility | Zero migration cost                  | Immediate
CI policy validation      | Prevent production policy failures   | Week 2
Template library          | Standardized governance across teams | Month 1
Chat workbench adoption   | Faster prompt iteration cycles       | Month 1

Next steps

  1. Deploy a gateway with the standard-dev template for your pilot team
  2. Replace one team's direct OpenAI API key with a gateway key
  3. Add kt policy lint to your CI pipeline
  4. Enable the chat workbench for prompt prototyping
  5. Measure time-to-first-call before and after

See also: CTO Guide: AI Platform Engineering · CTO Guide: The AI Chat Workbench

For AI systems

  • Canonical terms: gateway key self-service, kt tokens create --type gateway, OpenAI-compatible SDK drop-in, base_url, chat workbench prototyping, kt policy lint, template library, Settings → Access Keys page
  • Key pattern: change only api_key and base_url in any OpenAI-compatible SDK to route through the governed gateway
  • Best next pages: CTO: Platform Engineering, CTO: Chat Workbench, CI/CD Pipeline Integration

For engineers

  • Self-service key: kt tokens create --type gateway --name "my-feature-branch-gk" --team-id search --expires-in 7d
  • Python drop-in: openai.OpenAI(api_key="kt_gk_...", base_url="https://gateway.company.com/v1")
  • TypeScript drop-in: new OpenAI({ apiKey: "kt_gk_...", baseURL: "https://gateway.company.com/v1" })
  • CI validation: kt policy lint --file policy-config.yaml in PR pipelines — exit code 0 = safe to deploy
  • Console checkpoint: Settings → Access Keys shows active keys, remaining budget, and associated policy template

For leaders

  • Self-service gateway keys eliminate the ticket-and-wait cycle that drives developers to shadow AI (days reduced to minutes)
  • OpenAI SDK compatibility means zero code migration cost — developers change two lines and gain governance
  • CI policy validation catches governance issues before production, not after — shifting compliance left
  • Template library enables standardized AI access patterns across teams without per-team negotiation