CIO Guide: Eliminating Shadow AI with Centralized Governance
Shadow AI — employees using unauthorized AI services with corporate data — is the fastest-growing data loss vector in the enterprise. A recent survey found that 68% of employees use AI tools their IT department does not know about. Each untracked interaction is an audit gap, a potential data leak, and a compliance violation.
Use this page when
- You are implementing the gateway as the single entry point for all LLM traffic (firewall-enforced)
- You need to track consumer groups and per-user attribution across the organization
- You want to measure shadow AI elimination progress (goal: zero unassigned gateway keys)
- You are setting up network-level blocks on direct LLM provider endpoints so that only the gateway can reach them
Keeptrusts eliminates shadow AI by making the governed path the easiest path. This guide covers the technical controls, organizational incentives, and metrics that make centralized AI governance the default.
Primary audience
- Primary: Technical Leaders
- Secondary: Technical Engineers, AI Agents
The Gateway as Single Entry Point
The Keeptrusts gateway is a transparent proxy that sits between all applications (and users) and LLM providers. Combined with network policy, it becomes the only way to reach AI services.
Network Architecture
┌─────────────────────────────────────────┐
│ Corporate Network │
│ │
│ ┌──────────┐ ┌──────────┐ ┌───────┐ │
│ │ App A │ │ App B │ │ User │ │
│ └────┬─────┘ └────┬─────┘ └───┬───┘ │
│ │ │ │ │
│ └──────────────┼────────────┘ │
│ │ │
│ ┌───────▼────────┐ │
│ │ Keeptrusts │ │
│ │ Gateway │ │
│ │ (port 41002) │ │
│ └───────┬────────┘ │
│ │ │
└──────────────────────┼──────────────────┘
│
┌────────────┼────────────┐
▼ ▼ ▼
┌─────────┐ ┌──────────┐ ┌─────────┐
│ OpenAI │ │Anthropic │ │ Azure │
└─────────┘ └──────────┘ └─────────┘
✕ Direct access blocked by firewall/proxy
Implementation Steps
- Deploy the gateway on your internal network
- Block direct access to LLM provider endpoints at the firewall or web proxy level
- Distribute gateway keys to authorized users and applications
- Monitor for bypass attempts via network logs
# Deploy the gateway
kt gateway run \
--listen 0.0.0.0:41002 \
--policy-config production-policy.yaml
# Verify the gateway is the only path
curl -I https://api.openai.com/v1/models # Should be blocked
curl -I https://gateway.internal:41002/v1/models # Should succeed
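The exact firewall or web-proxy syntax for step 2 depends on your vendor, but the decision rule it must enforce is vendor-neutral: deny egress to provider API hosts from everywhere except the gateway host. A minimal sketch of that rule as a shell function — the hostnames and the `gateway.internal` name are illustrative, not part of any product configuration:

```shell
#!/bin/sh
# Sketch of the egress rule the firewall/proxy must enforce:
# provider API hosts are reachable only from the gateway host.
# Hostnames below are illustrative; adjust to your environment.
egress_allowed() {            # usage: egress_allowed <src-host> <dst-host>
  src="$1"; dst="$2"
  case "$dst" in
    api.openai.com|api.anthropic.com|bedrock-runtime.*.amazonaws.com)
      [ "$src" = "gateway.internal" ] ;;   # only the gateway may egress
    *)
      true ;;                              # other destinations unaffected
  esac
}

egress_allowed gateway.internal api.openai.com && echo "gateway: allowed"
egress_allowed dev-laptop-042  api.openai.com || echo "laptop: blocked"
```

Whatever enforcement point you use (NGFW rule, Squid ACL, DNS sinkhole), it should reduce to this predicate: provider destinations match only when the source is the gateway.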
Consumer Group Tracking
Consumer groups aggregate gateway usage by organizational unit. Every gateway key belongs to a consumer group, giving you instant visibility into who is using AI, how much, and for what.
Consumer Group Hierarchy
Organization
├── Engineering
│ ├── search-team (12 gateway keys)
│ ├── platform-team (5 gateway keys)
│ └── ml-team (8 gateway keys)
├── Product
│ ├── customer-support (20 gateway keys)
│ └── analytics (6 gateway keys)
├── Operations
│ ├── devops (4 gateway keys)
│ └── security (3 gateway keys)
└── Unassigned (0 gateway keys — goal state)
Console checkpoint: The Members & Teams page shows all consumer groups with active key counts, last-activity timestamps, and total spend. The goal is zero keys in "Unassigned."
# List all consumer groups with usage summary
kt tokens list \
--type gateway \
--group-by team \
--fields team,key_count,last_used,total_spend \
--format table
Per-User Attribution
Every LLM interaction is attributed to a specific user or service identity. This creates accountability without surveillance:
| Attribution Level | Identifier | Use Case |
|---|---|---|
| User | Email / SSO identity | Individual accountability |
| Service | Service account name | Application tracking |
| Team | Team/consumer group | Departmental reporting |
| Gateway | Gateway instance | Infrastructure tracking |
# Query events for a specific user
curl "https://api.keeptrusts.com/v1/events?user=jane.doe@company.com&since=30d" \
-H "Authorization: Bearer $API_TOKEN"
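On the request side, the identity being queried above typically travels with the gateway call itself. A hedged sketch, assuming the gateway exposes an OpenAI-compatible path and reads the caller identity from an `X-User-Id` header — the host, path, model name, and key variable are illustrative:

```shell
# Send a governed request with explicit per-user attribution.
# Host, path, and model are illustrative; the X-User-Id header is
# assumed to carry the identity the audit trail attributes to.
curl https://gateway.internal:41002/v1/chat/completions \
  -H "Authorization: Bearer $GATEWAY_KEY" \
  -H "X-User-Id: jane.doe@company.com" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```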
Privacy consideration: Per-user attribution logs the fact that an interaction occurred, the model used, the cost, and the policy outcome. It does not log prompt content unless the policy configuration explicitly enables content logging.
Audit Trail Completeness
The audit trail captures 100% of interactions — not a sample. This completeness is what makes Keeptrusts audit-ready.
What Gets Logged
| Event Type | Fields Captured | Retention |
|---|---|---|
| LLM request | Timestamp, user, model, provider, tokens, cost, policy outcome | Configurable |
| Policy violation | Violation type, policy name, action taken, content classification | Configurable |
| Escalation | Severity, assignee, resolution, time to resolve | Configurable |
| Admin action | Actor, action, resource, before/after state | Configurable |
| Gateway key lifecycle | Created, rotated, revoked, by whom | Configurable |
# Verify audit trail completeness
kt events list --since 7d --count
# Compare with gateway traffic metrics to confirm 100% capture
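The comparison in the comment above can be made concrete: divide the event count by the request count from your gateway traffic metrics. A minimal sketch — the helper and its inputs are illustrative; how you obtain the two counts depends on your metrics stack:

```shell
# Compute audit capture rate: events logged vs. requests served.
# Inputs are plain counts; their sources (kt events list, gateway
# traffic metrics) are up to your environment.
capture_rate() {   # usage: capture_rate <event-count> <request-count>
  awk -v e="$1" -v r="$2" 'BEGIN { printf "%.2f\n", (r > 0 ? 100*e/r : 0) }'
}

capture_rate 9998 10000   # prints 99.98
```

Any sustained gap below 100% means some traffic is reaching providers without passing through the audit trail and warrants a network-path investigation.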
Console checkpoint: The Events page shows the full event stream with filtering by user, team, provider, model, policy outcome, and date range.
DLP Enforcement
Data Loss Prevention policies prevent sensitive data from leaving the organization through AI interactions.
DLP Policy Configuration
policies:
  - name: dlp-outbound
    type: content_filter
    description: "Prevent sensitive data exfiltration via LLM prompts"
    enabled: true
    action: block
    direction: outbound
    patterns:
      - type: pii
        categories: [ssn, credit_card, bank_account]
      - type: regex
        pattern: "CONFIDENTIAL|INTERNAL ONLY|SECRET"
      - type: keyword
        terms: [acquisition, merger, earnings]
        context: financial
  - name: dlp-response-redaction
    type: content_filter
    description: "Redact sensitive patterns in LLM responses"
    enabled: true
    action: redact
    direction: inbound
    patterns:
      - type: pii
        categories: [email, phone, address]
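Before enabling `action: block` in production, it helps to smoke-test the regex pattern offline against known-sensitive and known-benign text. A sketch using `grep -E` with the same pattern as the `dlp-outbound` policy — the gateway's actual matcher may differ (case handling, Unicode), so treat this as a sanity check, not a guarantee:

```shell
# Offline sanity check of the dlp-outbound regex pattern.
pattern='CONFIDENTIAL|INTERNAL ONLY|SECRET'

matches() { printf '%s' "$1" | grep -Eq "$pattern"; }

matches "This memo is INTERNAL ONLY"     && echo "blocked"
matches "Summarize our public changelog" || echo "allowed"
```

Note that `SECRET` also matches inside words like `SECRETARY` — exactly the kind of overreach the false positive rate metric is meant to catch.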
DLP Metrics
| Metric | Description | Target |
|---|---|---|
| Outbound blocks | Attempts to send sensitive data to LLM | Trending down |
| Redaction events | Sensitive data redacted from responses | Stable or trending down |
| False positive rate | Legitimate requests blocked | < 2% |
| Pattern coverage | % of sensitive data types with DLP rules | > 95% |
Console checkpoint: Filter the Events page by outcome=block and policy_type=dlp to see DLP enforcement in action.
Console Members & Teams for Access Control
The console Members & Teams page is where you manage who has access to governed AI and under what constraints.
Access Control Model
| Role | Permissions | Typical Assignee |
|---|---|---|
| Organization Admin | Full platform access, policy management | CIO, CISO |
| Team Admin | Manage team members and gateway keys | Engineering manager |
| Team Member | Use gateway keys, access chat workbench | Developer |
| Viewer | Read-only dashboard access | Compliance officer |
Team Management Workflow
- Create a team in the console with a descriptive name
- Assign a team admin who manages day-to-day membership
- Configure the team's policy template and budget allocation
- Team admin provisions gateway keys for team members
- Monitor team-level usage in the Cost Center
Console checkpoint: The Members & Teams page shows all teams, their members, active gateway keys, policy template, and budget utilization.
Measuring Shadow AI Elimination
Track these metrics to verify that shadow AI is being eliminated:
| Metric | How to Measure | Target |
|---|---|---|
| Gateway coverage | Gateway events / (gateway + direct provider logs) | > 99% |
| Unassigned keys | Gateway keys without team assignment | 0 |
| Direct access attempts | Firewall blocks to LLM provider endpoints | Trending to 0 |
| Employee survey | "Do you use AI tools outside company channels?" | < 5% |
| New team onboarding time | Time from request to first governed AI call | < 24 hours |
# Check for direct access attempts (from firewall logs)
# This should trend toward zero as governance adoption increases
grep "api.openai.com\|api.anthropic.com\|bedrock" /var/log/firewall/blocked.log | wc -l
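Beyond the raw count, a per-destination tally of the same log shows which providers employees still try to reach directly, which is useful for prioritizing outreach. A sketch, assuming one blocked connection per log line with the destination host somewhere on the line — adjust the patterns to your firewall's log format:

```shell
# Tally blocked direct-access attempts per provider endpoint.
# Assumes one blocked connection per log line.
tally_blocked() {
  awk '
    /api\.openai\.com/    { n["openai"]++ }
    /api\.anthropic\.com/ { n["anthropic"]++ }
    /bedrock/             { n["bedrock"]++ }
    END { for (p in n) printf "%s %d\n", p, n[p] }
  ' "$@" | sort
}

printf '%s\n' \
  "DENY tcp dev-laptop-042 -> api.openai.com:443" \
  "DENY tcp ci-runner-7    -> api.openai.com:443" \
  "DENY tcp dev-laptop-042 -> api.anthropic.com:443" \
  | tally_blocked
# anthropic 1
# openai 2
```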
ROI Summary
| Risk | Cost Without Governance | With Keeptrusts |
|---|---|---|
| Data breach via AI | $4.2M average (IBM 2025) | Prevented by DLP policies |
| Regulatory fine | Varies by jurisdiction | Audit-ready evidence |
| Unauthorized spend | Uncontrolled provider bills | Wallet-enforced budgets |
| IP leakage | Unquantifiable | Content classification + blocking |
| Audit failure | Re-audit costs + reputation | Continuous compliance |
Next steps
- Audit current direct LLM provider access in your firewall logs
- Deploy the gateway and distribute gateway keys to one pilot team
- Block direct provider access for the pilot team at the network level
- Monitor the Events page for 7 days to establish a baseline
- Roll out to additional teams, targeting 100% gateway coverage within 90 days
See also: CIO Guide: Building an Enterprise AI Governance Framework · CIO Guide: Real-Time AI Risk Dashboards
For AI systems
- Canonical terms: shadow AI, centralized gateway, consumer groups, gateway keys (`kt_gk_...`), per-user attribution, `kt gateway run --listen 0.0.0.0:41002`, firewall block on provider endpoints, Members & Teams page, DLP policies, `X-User-Id` header
- Key metrics: unassigned gateway keys (goal: zero), direct provider access attempts (firewall logs), consumer group coverage
- Best next pages: CIO: AI Governance Framework, CIO: Risk Dashboards, Security Engineering
For engineers
- Deploy gateway: `kt gateway run --listen 0.0.0.0:41002 --policy-config production-policy.yaml`
- Block direct access: configure firewall/web proxy rules to deny egress to `api.openai.com`, `api.anthropic.com`, etc.
- Distribute gateway keys: one key per user/service, all keys assigned to consumer groups (Teams)
- Verify enforcement: `curl -I https://api.openai.com/v1/models` should be blocked; `curl -I https://gateway.internal:41002/v1/models` should succeed
- Monitor: Members & Teams page shows consumer groups with active key counts, last-activity, and spend
For leaders
- Shadow AI is the fastest-growing data loss vector — 68% of employees use AI tools IT doesn’t know about
- The gateway + firewall combination makes the governed path the only path, not just the preferred path
- Consumer group tracking provides instant visibility into who is using AI, how much, and for what purpose
- The goal state is zero unassigned gateway keys — every AI interaction is attributed to a team and individual