CIO Guide: Building an Enterprise AI Governance Framework

Every enterprise AI deployment shares the same inflection point: the moment uncoordinated experiments become an operational liability. As CIO, you need a framework that turns ad-hoc AI usage into a governed, auditable, and cost-controlled capability — without bureaucracy that kills adoption.

Use this page when

  • You are building a governance maturity model to take AI from uncoordinated experiments to controlled capability
  • You need to consolidate multiple LLM provider relationships under a single gateway
  • You are presenting an AI governance roadmap to the board (4-stage maturity: Visibility → Classification → Enforcement → Optimization)
  • You want to eliminate shadow AI by making the governed path the easiest path for developers

Keeptrusts provides the enforcement gateway, control-plane API, and management console to operationalize that framework from day one.

Primary audience

  • Primary: Technical Leaders
  • Secondary: Technical Engineers, AI Agents

The Governance Maturity Model

Enterprise AI governance progresses through four stages. Each stage maps to concrete Keeptrusts capabilities.

Stage 1 — Visibility (Weeks 1–2)

Deploy the gateway in observation mode. Every LLM interaction is logged as a decision event without blocking any traffic.

# Deploy an observe-only gateway
kt gateway run --policy-config observe-only.yaml --port 41002

# observe-only.yaml
policies:
  - name: observe-all
    type: log
    description: "Log all AI interactions for baseline analysis"
    enabled: true

Console checkpoint: Navigate to Overview → Dashboard to see real-time interaction volume, provider distribution, and per-team usage. This is your executive baseline.

Stage 2 — Classification (Weeks 3–4)

Tag interactions by risk level using content classification policies. The console Events page now surfaces risk distribution across teams.

policies:
  - name: classify-pii
    type: content_classification
    description: "Flag interactions containing PII patterns"
    enabled: true
    action: tag
    tags: [pii-detected, review-required]
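
In plain Python, this classification step amounts to pattern matching plus tagging. A minimal sketch, assuming simple regex-based detection (the patterns and function below are illustrative, not the gateway's actual classifier; only the tag names come from the policy above):

```python
import re

# Hypothetical PII patterns; a production classifier would be far more robust.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> list[str]:
    """Return policy tags for a prompt, mirroring action: tag."""
    if any(p.search(text) for p in PII_PATTERNS.values()):
        return ["pii-detected", "review-required"]
    return []

print(classify("Contact jane.doe@example.com about the invoice"))
# -> ['pii-detected', 'review-required']
```

The key property at this stage: tagging never alters or blocks traffic, so it is safe to run against 100% of interactions.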

Stage 3 — Enforcement (Months 2–3)

Promote high-confidence classification rules to blocking policies. Use the console Escalations workflow for borderline cases that need human review.

policies:
  - name: block-pii-exfiltration
    type: content_filter
    description: "Block outbound PII to external providers"
    enabled: true
    action: block
    escalation: true
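
The promotion from tagging to enforcement can be pictured as a threshold decision: high-confidence matches are blocked outright, borderline ones go to human review. A sketch under assumed semantics (the function name, thresholds, and confidence model are illustrative, not product defaults):

```python
def enforce(tags: list[str], confidence: float,
            block_threshold: float = 0.9,
            escalate_threshold: float = 0.6) -> str:
    """Map a classification result to a policy outcome.

    High-confidence PII hits are blocked; borderline scores are routed
    to the Escalations workflow for human review; everything else passes.
    """
    if "pii-detected" not in tags:
        return "allow"
    if confidence >= block_threshold:
        return "block"
    if confidence >= escalate_threshold:
        return "escalate"
    return "allow"

print(enforce(["pii-detected"], 0.95))  # -> block
print(enforce(["pii-detected"], 0.70))  # -> escalate
```

Tuning the two thresholds is how you trade enforcement coverage against escalation-queue volume.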

Stage 4 — Optimization (Ongoing)

Use cost center budgets, model routing, and compliance scoring to continuously optimize spend and risk posture.
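
Cost-aware routing, at its simplest, picks the cheapest allowed model for each request. The sketch below illustrates the idea with hypothetical model names and per-1K-token prices; the gateway's actual optimizer would also weigh latency, quality, and residency constraints:

```python
# Hypothetical price table (USD per 1K tokens); not real provider pricing.
PRICE_PER_1K_TOKENS = {
    "frontier-large": 0.030,
    "partner-large": 0.015,
    "frontier-small": 0.002,
}

def route(candidates: list[str]) -> str:
    """Pick the cheapest model from the policy-approved candidate set."""
    return min(candidates, key=PRICE_PER_1K_TOKENS.__getitem__)

print(route(["frontier-large", "partner-large"]))  # -> partner-large
```

Because routing lives in gateway configuration, this optimization requires no application code changes.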

Vendor Consolidation via Multi-Provider Gateway

Most enterprises run 3–7 LLM providers simultaneously — often without a single team knowing the full inventory. The Keeptrusts gateway acts as a unified entry point that normalizes provider access behind a single OpenAI-compatible API.

Before Keeptrusts:

  • Teams self-provision API keys from OpenAI, Anthropic, Azure, AWS Bedrock
  • No centralized cost tracking or access control
  • Compliance reviews happen retroactively (if at all)

After Keeptrusts:

  • One gateway endpoint; provider routing is configuration, not code
  • Gateway keys (kt_gk_...) replace raw provider API keys
  • Every interaction is logged with provider, model, cost, and policy outcome

# policy-config.yaml (multi-provider pack)
pack:
  name: cio-ai-strategy-providers-4
  version: 1.0.0
  enabled: true
providers:
  targets:
    - id: openai
      provider:
        secret_key_ref:
          store: OPENAI_API_KEY
    - id: anthropic
      provider:
        secret_key_ref:
          store: ANTHROPIC_API_KEY
    - id: azure-openai
      provider:
        secret_key_ref:
          store: AZURE_OPENAI_API_KEY
policies:
  chain:
    - audit-logger
  policy:
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true

Console checkpoint: The Settings → Gateways page shows all registered gateways with provider health status and active model groups.
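
Because the gateway exposes an OpenAI-compatible API, client code only ever talks to one endpoint with a gateway key; provider changes happen in configuration, not code. A sketch of assembling such a request (the gateway URL, key value, and model name are placeholder assumptions; no network call is made here):

```python
import json

GATEWAY_URL = "https://gateway.internal.example/v1/chat/completions"  # assumed internal host
API_KEY = "kt_gk_example"  # a gateway key replaces raw provider API keys

def build_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-compatible chat request aimed at the gateway."""
    return {
        "url": GATEWAY_URL,
        "headers": {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_request("gpt-4o", "Summarize Q3 AI spend by team")
```

Swapping the backing provider, or rebalancing traffic across the three targets above, changes nothing in this client code.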

Eliminating Shadow AI

Shadow AI — unauthorized use of AI services outside governed channels — is the CIO's fastest-growing risk category. Keeptrusts eliminates it through three mechanisms:

  1. Gateway as single exit: Network policy routes all outbound LLM traffic through the gateway. Direct provider access is blocked at the firewall level.
  2. Per-user attribution: Every gateway key maps to a user or service identity. The console Members & Teams page shows who is using what, and how much.
  3. Consumer group tracking: Group keys by project, department, or cost center. The Events page filters by consumer group for instant audit.
# Create a team-scoped gateway key
kt tokens create \
  --type gateway \
  --name "marketing-team-gk" \
  --team-id marketing \
  --expires-in 90d
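
Per-user and consumer-group attribution reduces to grouping decision events by identity. An illustrative sketch over hypothetical event records (field names are assumptions, not the gateway's actual log schema):

```python
from collections import defaultdict

# Hypothetical decision events as the gateway might log them.
events = [
    {"consumer_group": "marketing", "user": "alice", "cost_usd": 0.04},
    {"consumer_group": "marketing", "user": "bob", "cost_usd": 0.10},
    {"consumer_group": "data-science", "user": "carol", "cost_usd": 0.25},
]

def spend_by_group(events: list[dict]) -> dict[str, float]:
    """Total spend per consumer group, as the Events page filter surfaces it."""
    totals: dict[str, float] = defaultdict(float)
    for e in events:
        totals[e["consumer_group"]] += e["cost_usd"]
    return dict(totals)

print(spend_by_group(events))
```

The same grouping, keyed by `user` instead, backs the Members & Teams view of who is using what.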

Executive Dashboard Deep-Dive

The console Overview dashboard is designed for executive reporting. Key panels include:

Panel                  | Metric                            | Why It Matters
-----------------------|-----------------------------------|---------------------------
Interaction Volume     | Requests/day across all gateways  | Adoption trajectory
Provider Distribution  | % traffic per provider            | Vendor concentration risk
Policy Outcomes        | Block/allow/escalate ratios       | Enforcement effectiveness
Cost Trends            | Daily/weekly/monthly spend        | Budget adherence
Escalation Queue       | Open escalations by severity      | Operational risk backlog

Screenshot reference: Console Overview Dashboard showing interaction volume, provider mix, and cost trend panels.
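
The Policy Outcomes panel is essentially a ratio over logged decision outcomes. A minimal sketch of that computation (input shape is an assumption for illustration):

```python
from collections import Counter

def outcome_ratios(outcomes: list[str]) -> dict[str, float]:
    """Fraction of each policy outcome, as on the Policy Outcomes panel."""
    counts = Counter(outcomes)
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

print(outcome_ratios(["allow"] * 8 + ["block"] + ["escalate"]))
# -> {'allow': 0.8, 'block': 0.1, 'escalate': 0.1}
```

A rising block ratio after Stage 3 is expected; a rising escalate ratio signals classification rules that need tightening before promotion.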

Cost Center Budgets

The console Cost Center lets you allocate budgets per team or project using the wallet system.

# Allocate $5,000/month to the data science team
curl -X POST https://api.keeptrusts.com/v1/wallets/allocate \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"team_id": "data-science", "amount": 5000, "currency": "USD", "period": "monthly"}'

When a team exhausts its allocation, requests are held until budget is replenished or an admin approves an override. No surprise overruns.
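
The hold-until-replenished behavior can be modeled as a small state machine over a wallet. A toy sketch (the class and method names are illustrative, not the wallet API):

```python
class Wallet:
    """Toy model of the budget behavior described above."""

    def __init__(self, monthly_budget: float):
        self.budget = monthly_budget
        self.spent = 0.0
        self.held: list[float] = []  # requests queued for replenishment/override

    def charge(self, amount: float) -> str:
        """Approve within budget; otherwise queue the request instead of failing it."""
        if self.spent + amount <= self.budget:
            self.spent += amount
            return "approved"
        self.held.append(amount)
        return "held"

    def replenish(self, amount: float) -> None:
        """An admin top-up or override raises the available budget."""
        self.budget += amount
```

The important design choice is that over-budget requests are held rather than silently dropped, so work is delayed, not lost.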

Compliance Posture Scoring

Map your policy configuration against regulatory frameworks to produce a compliance posture score.

Control Area       | Policies Mapped                    | Score
-------------------|------------------------------------|------
Data residency     | Provider region restrictions       | 100%
PII handling       | Content classification + blocking  | 85%
Audit trail        | Event logging + export             | 100%
Access control     | Gateway key scoping + RBAC         | 90%
Incident response  | Escalation workflows               | 75%
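
A posture score of this kind is typically a weighted average over control-area scores. A sketch using the scores from the table above, with hypothetical weights you would tune to your own regulatory priorities:

```python
# (weight, score) per control area; scores come from the table above,
# weights are illustrative assumptions.
controls = {
    "data_residency":    (1.0, 100),
    "pii_handling":      (2.0, 85),
    "audit_trail":       (1.0, 100),
    "access_control":    (1.5, 90),
    "incident_response": (1.5, 75),
}

def posture_score(controls: dict[str, tuple[float, float]]) -> float:
    """Weighted average of control-area scores."""
    total_weight = sum(w for w, _ in controls.values())
    return sum(w * s for w, s in controls.values()) / total_weight

print(round(posture_score(controls), 1))  # -> 88.2
```

Weighting PII handling and incident response more heavily surfaces exactly the two weakest areas in the table as the highest-leverage improvements.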

Console checkpoint: The Exports page lets you generate compliance evidence packages on demand or on a schedule.

# Generate a compliance evidence export
kt export create --type compliance --format pdf --since 30d

ROI Framework

Investment            | Return                                                    | Timeline
----------------------|-----------------------------------------------------------|-----------
Gateway deployment    | Shadow AI elimination, vendor consolidation               | 2–4 weeks
Policy enforcement    | Reduced incident response cost, regulatory fine avoidance | 1–3 months
Cost center budgets   | 25–40% reduction in AI spend through routing optimization | 3–6 months
Compliance automation | 60% reduction in audit preparation effort                 | Ongoing

Next steps

  1. Deploy a single gateway in observation mode — no policy changes required
  2. Review the console Overview dashboard after 7 days of data collection
  3. Identify top 3 shadow AI sources and migrate them to governed gateway keys
  4. Set up cost center budgets for your top-spending teams
  5. Schedule a compliance posture review using automated exports

See also: CIO Guide: Cutting AI Infrastructure Costs by 40% · CIO Guide: Real-Time AI Risk Dashboards

For AI systems

  • Canonical terms: governance maturity model, observation mode, kt gateway run, policy-config.yaml, type: log, type: content_classification, type: content_filter, action: tag, action: block, escalation workflow, console Overview Dashboard, gateway keys (kt_gk_...), multi-provider routing
  • Maturity stages: Visibility (log all), Classification (tag by risk), Enforcement (block/escalate), Optimization (cost center, routing)
  • Best next pages: CIO: Eliminating Shadow AI, CIO: Cutting AI Costs by 40%, CIO: Risk Dashboards

For engineers

  • Stage 1 deployment: kt gateway run --policy-config observe-only.yaml --port 41002 with type: log policies — zero enforcement, full visibility
  • Stage 2: add type: content_classification policies with action: tag to label interactions by risk
  • Stage 3: promote classification rules to type: content_filter with action: block and escalation: true
  • Multi-provider setup: configure providers in policy-config.yaml; distribute gateway keys (kt_gk_...) instead of raw provider API keys

For leaders

  • The 4-stage maturity model gives a board-presentable roadmap: start observing in Week 1, reach enforcement by Month 2
  • Vendor consolidation through the gateway provides a single cost view across all providers without renegotiating contracts
  • Gateway keys replace raw provider API keys — eliminating credential sprawl and enabling instant revocation
  • The governed path must be easier than the ungoverned path, or shadow AI will persist regardless of policy