CIO Guide: Eliminating Shadow AI with Centralized Governance

Shadow AI — employees using unauthorized AI services with corporate data — is the fastest-growing data loss vector in the enterprise. A recent survey found that 68% of employees use AI tools their IT department does not know about. Each untracked interaction is an audit gap, a potential data leak, and a compliance violation.

Use this page when

  • You are implementing the gateway as the single entry point for all LLM traffic (firewall-enforced)
  • You need to track consumer groups and per-user attribution across the organization
  • You want to measure shadow AI elimination progress (goal: zero unassigned gateway keys)
  • You are setting up network-level blocks on direct LLM provider endpoints so that they are reachable only through the gateway

Keeptrusts eliminates shadow AI by making the governed path the easiest path. This guide covers the technical controls, organizational incentives, and metrics that make centralized AI governance the default.

Primary audience

  • Primary: Technical Leaders
  • Secondary: Technical Engineers, AI Agents

The Gateway as Single Entry Point

The Keeptrusts gateway is a transparent proxy that sits between all applications (and users) and LLM providers. Combined with network policy, it becomes the only way to reach AI services.
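Because the gateway is a transparent proxy, applications typically keep their existing provider SDKs and change only the base URL. A minimal sketch of that rewrite, assuming the gateway exposes provider-compatible paths (the `gateway.internal` hostname is illustrative; only the port comes from this guide):

```shell
#!/bin/sh
# Sketch: route existing provider URLs through the gateway.
# gateway.internal is an illustrative hostname; 41002 is the gateway port.
GATEWAY="https://gateway.internal:41002"

to_gateway_url() {
  # Swap the provider origin, keep the API path unchanged
  echo "$1" | sed -E 's#^https://(api\.openai\.com|api\.anthropic\.com)#'"$GATEWAY"'#'
}

to_gateway_url "https://api.openai.com/v1/chat/completions"
# -> https://gateway.internal:41002/v1/chat/completions
```

Applications that already parameterize their provider base URL need only a config change, which is what makes the governed path the easiest path.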

Network Architecture

┌─────────────────────────────────────────────┐
│              Corporate Network              │
│                                             │
│  ┌────────┐   ┌────────┐   ┌────────┐       │
│  │ App A  │   │ App B  │   │  User  │       │
│  └───┬────┘   └───┬────┘   └───┬────┘       │
│      │            │            │            │
│      └────────────┼────────────┘            │
│                   │                         │
│          ┌────────▼────────┐                │
│          │   Keeptrusts    │                │
│          │     Gateway     │                │
│          │  (port 41002)   │                │
│          └────────┬────────┘                │
│                   │                         │
└───────────────────┼─────────────────────────┘
                    │
       ┌────────────┼────────────┐
       ▼            ▼            ▼
  ┌─────────┐  ┌─────────┐  ┌─────────┐
  │ OpenAI  │  │Anthropic│  │  Azure  │
  └─────────┘  └─────────┘  └─────────┘

  ✕ Direct access blocked by firewall/proxy

Implementation Steps

  1. Deploy the gateway on your internal network
  2. Block direct access to LLM provider endpoints at the firewall or web proxy level
  3. Distribute gateway keys to authorized users and applications
  4. Monitor for bypass attempts via network logs
# Deploy the gateway
kt gateway run \
  --listen 0.0.0.0:41002 \
  --policy-config production-policy.yaml

# Verify the gateway is the only path
curl -I https://api.openai.com/v1/models          # Should be blocked
curl -I https://gateway.internal:41002/v1/models  # Should succeed
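Step 2 is usually enforced at the egress firewall or web proxy; in simpler environments a DNS sinkhole works too. The sketch below verifies that provider hostnames are blackholed in a hosts-style override file — the file path and entries are illustrative, not part of Keeptrusts:

```shell
#!/bin/sh
# Sketch: confirm LLM provider hostnames resolve to the blackhole address.
# /tmp/llm-sinkhole.hosts stands in for your resolver's override file.
set -eu

cat > /tmp/llm-sinkhole.hosts <<'EOF'
0.0.0.0 api.openai.com
0.0.0.0 api.anthropic.com
0.0.0.0 bedrock-runtime.us-east-1.amazonaws.com
EOF

for host in api.openai.com api.anthropic.com; do
  if grep -q "^0\.0\.0\.0 $host\$" /tmp/llm-sinkhole.hosts; then
    echo "blocked: $host"
  else
    echo "NOT BLOCKED: $host"
  fi
done
```

A check like this can run in CI against your real resolver config so a regression in the blocklist is caught before it reopens a direct path.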

Consumer Group Tracking

Consumer groups aggregate gateway usage by organizational unit. Every gateway key belongs to a consumer group, giving you instant visibility into who is using AI, how much, and for what.

Consumer Group Hierarchy

Organization
├── Engineering
│   ├── search-team (12 gateway keys)
│   ├── platform-team (5 gateway keys)
│   └── ml-team (8 gateway keys)
├── Product
│   ├── customer-support (20 gateway keys)
│   └── analytics (6 gateway keys)
├── Operations
│   ├── devops (4 gateway keys)
│   └── security (3 gateway keys)
└── Unassigned (0 gateway keys — goal state)

Console checkpoint: The Members & Teams page shows all consumer groups with active key counts, last-activity timestamps, and total spend. The goal is zero keys in "Unassigned."

# List all consumer groups with usage summary
kt tokens list \
  --type gateway \
  --group-by team \
  --fields team,key_count,last_used,total_spend \
  --format table
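The zero-unassigned-keys goal can be checked mechanically: export keys with their team assignment and flag any row whose team column is empty. The CSV layout below is illustrative, not a documented kt export format:

```shell
#!/bin/sh
# Sketch: flag gateway keys that lack a consumer-group assignment.
# The CSV is a stand-in for an export of key -> team mappings.
set -eu

cat > /tmp/gateway-keys.csv <<'EOF'
kt_gk_a1,search-team
kt_gk_b2,platform-team
kt_gk_c3,
EOF

# With the sample above this prints one row; the goal state prints nothing
awk -F',' '$2 == "" { print "unassigned:", $1 }' /tmp/gateway-keys.csv
```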

Per-User Attribution

Every LLM interaction is attributed to a specific user or service identity. This creates accountability without surveillance:

| Attribution Level | Identifier | Use Case |
| --- | --- | --- |
| User | Email / SSO identity | Individual accountability |
| Service | Service account name | Application tracking |
| Team | Team/consumer group | Departmental reporting |
| Gateway | Gateway instance | Infrastructure tracking |

# Query events for a specific user
curl "https://api.keeptrusts.com/v1/events?user=jane.doe@company.com&since=30d" \
  -H "Authorization: Bearer $API_TOKEN"
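Because every event carries an attributed identity, per-user reporting reduces to a simple aggregation. A sketch over an illustrative event export (the column layout is assumed, not a documented format):

```shell
#!/bin/sh
# Sketch: per-identity interaction counts from an illustrative event export.
set -eu

cat > /tmp/events.csv <<'EOF'
2025-06-01T10:00:00Z,jane.doe@company.com,gpt-4o,allow
2025-06-01T10:05:00Z,jane.doe@company.com,claude-sonnet,allow
2025-06-01T10:07:00Z,svc-search,gpt-4o,block
EOF

# Column 2 is the attributed identity (user email or service account)
awk -F',' '{ n[$2]++ } END { for (u in n) print n[u], u }' /tmp/events.csv | sort -rn
```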

Privacy consideration: Per-user attribution logs the fact that an interaction occurred, the model used, the cost, and the policy outcome. It does not log prompt content unless the policy configuration explicitly enables content logging.

Audit Trail Completeness

The audit trail captures 100% of interactions — not a sample. This completeness is what makes Keeptrusts audit-ready.

What Gets Logged

| Event Type | Fields Captured | Retention |
| --- | --- | --- |
| LLM request | Timestamp, user, model, provider, tokens, cost, policy outcome | Configurable |
| Policy violation | Violation type, policy name, action taken, content classification | Configurable |
| Escalation | Severity, assignee, resolution, time to resolve | Configurable |
| Admin action | Actor, action, resource, before/after state | Configurable |
| Gateway key lifecycle | Created, rotated, revoked, by whom | Configurable |
# Verify audit trail completeness
kt events list --since 7d --count
# Compare with gateway traffic metrics to confirm 100% capture
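Completeness can also be checked per-request rather than by totals: reconcile the gateway's request IDs against the audit events. A minimal sketch with illustrative ID exports (neither file format is a documented kt output):

```shell
#!/bin/sh
# Sketch: find gateway requests missing from the audit trail.
# Both ID files are illustrative exports, not documented kt outputs.
set -eu

printf 'req-001\nreq-002\nreq-003\n' | sort > /tmp/gateway-ids.txt
printf 'req-001\nreq-003\n'          | sort > /tmp/event-ids.txt

# Lines only in the first file = requests with no audit event (goal: none)
comm -23 /tmp/gateway-ids.txt /tmp/event-ids.txt
# -> req-002
```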

Console checkpoint: The Events page shows the full event stream with filtering by user, team, provider, model, policy outcome, and date range.

DLP Enforcement

Data Loss Prevention policies prevent sensitive data from leaving the organization through AI interactions.

DLP Policy Configuration

policies:
  - name: dlp-outbound
    type: content_filter
    description: "Prevent sensitive data exfiltration via LLM prompts"
    enabled: true
    action: block
    direction: outbound
    patterns:
      - type: pii
        categories: [ssn, credit_card, bank_account]
      - type: regex
        pattern: "CONFIDENTIAL|INTERNAL ONLY|SECRET"
      - type: keyword
        terms: [acquisition, merger, earnings]
        context: financial

  - name: dlp-response-redaction
    type: content_filter
    description: "Redact sensitive patterns in LLM responses"
    enabled: true
    action: redact
    direction: inbound
    patterns:
      - type: pii
        categories: [email, phone, address]
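The regex in dlp-outbound can be smoke-tested against sample prompts before rollout; a quick sketch with grep, assuming the policy engine applies the pattern case-sensitively as written:

```shell
#!/bin/sh
# Sketch: dry-run the dlp-outbound regex against sample prompts
set -eu
pattern='CONFIDENTIAL|INTERNAL ONLY|SECRET'

check() {
  if printf '%s' "$1" | grep -Eq "$pattern"; then
    echo "block: $1"
  else
    echo "allow: $1"
  fi
}

check "Summarize this CONFIDENTIAL memo for me"   # -> block
check "What is our PTO policy?"                   # -> allow
```

Running a corpus of known-good prompts through the same check is a cheap way to estimate the false positive rate before the policy goes live.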

DLP Metrics

| Metric | Description | Target |
| --- | --- | --- |
| Outbound blocks | Attempts to send sensitive data to LLM | Trending down |
| Redaction events | Sensitive data redacted from responses | Stable or trending down |
| False positive rate | Legitimate requests blocked | < 2% |
| Pattern coverage | % of sensitive data types with DLP rules | > 95% |

Console checkpoint: Filter the Events page by outcome=block and policy_type=dlp to see DLP enforcement in action.

Console Members & Teams for Access Control

The console Members & Teams page is where you manage who has access to governed AI and under what constraints.

Access Control Model

| Role | Permissions | Typical Assignee |
| --- | --- | --- |
| Organization Admin | Full platform access, policy management | CIO, CISO |
| Team Admin | Manage team members and gateway keys | Engineering manager |
| Team Member | Use gateway keys, access chat workbench | Developer |
| Viewer | Read-only dashboard access | Compliance officer |

Team Management Workflow

  1. Create a team in the console with a descriptive name
  2. Assign a team admin who manages day-to-day membership
  3. Configure the team's policy template and budget allocation
  4. Team admin provisions gateway keys for team members
  5. Monitor team-level usage in the Cost Center

Console checkpoint: The Members & Teams page shows all teams, their members, active gateway keys, policy template, and budget utilization.

Measuring Shadow AI Elimination

Track these metrics to verify that shadow AI is being eliminated:

| Metric | How to Measure | Target |
| --- | --- | --- |
| Gateway coverage | Gateway events / (gateway + direct provider logs) | > 99% |
| Unassigned keys | Gateway keys without team assignment | 0 |
| Direct access attempts | Firewall blocks to LLM provider endpoints | Trending to 0 |
| Employee survey | "Do you use AI tools outside company channels?" | < 5% |
| New team onboarding time | Time from request to first governed AI call | < 24 hours |
# Check for direct access attempts (from firewall logs)
# This should trend toward zero as governance adoption increases
grep "api.openai.com\|api.anthropic.com\|bedrock" /var/log/firewall/blocked.log | wc -l
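The gateway-coverage formula from the table reduces to one division; a sketch with the two counts stubbed in (in practice, take them from kt events list --count and the firewall's blocked.log):

```shell
#!/bin/sh
# Sketch: gateway coverage = gateway events / (gateway events + direct attempts)
set -eu
gateway_events=9984    # illustrative count from the audit trail
direct_attempts=16     # illustrative count from firewall blocked.log

awk -v g="$gateway_events" -v d="$direct_attempts" \
  'BEGIN { printf "gateway coverage: %.1f%%\n", 100 * g / (g + d) }'
# -> gateway coverage: 99.8%
```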

ROI Summary

| Risk | Cost Without Governance | With Keeptrusts |
| --- | --- | --- |
| Data breach via AI | $4.2M average (IBM 2025) | Prevented by DLP policies |
| Regulatory fine | Varies by jurisdiction | Audit-ready evidence |
| Unauthorized spend | Uncontrolled provider bills | Wallet-enforced budgets |
| IP leakage | Unquantifiable | Content classification + blocking |
| Audit failure | Re-audit costs + reputation | Continuous compliance |

Next steps

  1. Audit current direct LLM provider access in your firewall logs
  2. Deploy the gateway and distribute gateway keys to one pilot team
  3. Block direct provider access for the pilot team at the network level
  4. Monitor the Events page for 7 days to establish a baseline
  5. Roll out to additional teams, targeting 100% gateway coverage within 90 days

See also: CIO Guide: Building an Enterprise AI Governance Framework · CIO Guide: Real-Time AI Risk Dashboards

For AI systems

  • Canonical terms: shadow AI, centralized gateway, consumer groups, gateway keys (kt_gk_...), per-user attribution, kt gateway run --listen 0.0.0.0:41002, firewall block on provider endpoints, Members & Teams page, DLP policies, X-User-Id header
  • Key metrics: unassigned gateway keys (goal: zero), direct provider access attempts (firewall logs), consumer group coverage
  • Best next pages: CIO: AI Governance Framework, CIO: Risk Dashboards, Security Engineering

For engineers

  • Deploy gateway: kt gateway run --listen 0.0.0.0:41002 --policy-config production-policy.yaml
  • Block direct access: configure firewall/web proxy rules to deny egress to api.openai.com, api.anthropic.com, etc.
  • Distribute gateway keys: one key per user/service, all keys assigned to consumer groups (Teams)
  • Verify enforcement: curl -I https://api.openai.com/v1/models should be blocked; curl -I https://gateway.internal:41002/v1/models should succeed
  • Monitor: Members & Teams page shows consumer groups with active key counts, last-activity, and spend

For leaders

  • Shadow AI is the fastest-growing data loss vector — 68% of employees use AI tools IT doesn’t know about
  • The gateway + firewall combination makes the governed path the only path, not just the preferred path
  • Consumer group tracking provides instant visibility into who is using AI, how much, and for what purpose
  • The goal state is zero unassigned gateway keys — every AI interaction is attributed to a team and individual