Centralize AI Observability Across All Teams

Most organizations have little visibility into how their teams use AI: requests go straight to providers with no logging, no cost tracking, and no record of what data is being sent. Keeptrusts gives you a single pane of glass for every AI interaction across the organization.

Use this page when

  • You need a single view of all AI requests, spend, and policy decisions across teams and gateways.
  • You are setting up dashboards and exports for capacity planning, cost allocation, or compliance evidence.
  • You want to understand what data the gateway captures automatically and how to query it.

Primary audience

  • Primary: Technical Leaders
  • Secondary: Technical Engineers, AI Agents

What you'll achieve

  • Unified event stream for every AI request across all teams and gateways
  • Real-time spend tracking broken down by team, user, model, and provider
  • Policy decision visibility showing what was blocked, redacted, escalated, or allowed
  • Team-scoped dashboards so each team sees their own usage without access to others
  • Evidence export for compliance, security review, and capacity planning

The observability stack

Keeptrusts captures four categories of data automatically:

1. Events

Every request through the gateway generates an event that includes:

  • Request and response metadata (model, provider, token counts)
  • Policy evaluation outcomes for each policy in the chain
  • Redaction decisions and categories
  • Provider routing decisions (which provider was selected and why)
  • Latency breakdown (gateway processing, upstream provider, total)
  • Cost computation (input tokens, output tokens, total cost)

Console: Navigate to Events to filter, search, and inspect individual events.

CLI:

# Tail events in real time
kt events tail --format json

# Query events for a time range
kt events list \
  --from "2026-04-01T00:00:00Z" \
  --to "2026-04-23T23:59:59Z" \
  --team engineering

See Events for the full event model.
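
Once events are flowing, the JSON stream from the CLI can be post-processed locally. A minimal sketch of summarizing tokens and cost from tailed events, using hypothetical records shaped after the fields listed above (the exact field names are assumptions; see Events for the real schema):

```python
import json

# Hypothetical event records shaped after the fields listed above
# (model, provider, token counts, latency, cost); field names are
# assumptions, not the documented schema.
raw = """
{"model": "gpt-4o", "provider": "openai", "input_tokens": 420, "output_tokens": 150, "total_cost": 0.0031, "latency_ms": {"gateway": 12, "upstream": 840, "total": 852}}
{"model": "claude-sonnet", "provider": "anthropic", "input_tokens": 900, "output_tokens": 300, "total_cost": 0.0072, "latency_ms": {"gateway": 9, "upstream": 1210, "total": 1219}}
"""

# Parse one JSON event per line, as a tailed stream would produce
events = [json.loads(line) for line in raw.strip().splitlines()]

# Summarize token volume and cost across the captured events
total_tokens = sum(e["input_tokens"] + e["output_tokens"] for e in events)
total_cost = sum(e["total_cost"] for e in events)
print(total_tokens, round(total_cost, 4))
```

The same pattern works on the output of `kt events list` piped to a file.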

2. Spend tracking

Every event with pricing data contributes to the spend ledger. The Spend page shows:

  • Total spend over configurable time ranges
  • Spend by team — identify which teams drive the most cost
  • Spend by model — see which models are most expensive
  • Spend by provider — compare costs across providers
  • Per-request cost — drill into individual expensive requests

See Cost and Spend for pricing configuration and spend analysis.
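
The same breakdowns the Spend page shows can be reproduced from exported events. A sketch of aggregating spend by team and model, using hypothetical ledger records (field names are assumptions; see Cost and Spend for the actual schema):

```python
from collections import defaultdict

# Hypothetical per-event spend records; field names are assumptions.
ledger = [
    {"team": "engineering", "model": "gpt-4o", "cost": 0.0030},
    {"team": "engineering", "model": "gpt-4o-mini", "cost": 0.0004},
    {"team": "data-science", "model": "gpt-4o", "cost": 0.0090},
]

# Aggregate cost along the two dimensions the Spend page breaks out
spend_by_team = defaultdict(float)
spend_by_model = defaultdict(float)
for entry in ledger:
    spend_by_team[entry["team"]] += entry["cost"]
    spend_by_model[entry["model"]] += entry["cost"]

# Rank teams by spend to find the top cost driver
top_team = max(spend_by_team, key=spend_by_team.get)
print(top_team, round(spend_by_team[top_team], 4))
```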

3. Policy outcomes

Every policy evaluation is recorded. Aggregate these to understand:

  • Block rate — what percentage of requests are being blocked?
  • Redaction rate — how much PII is being caught?
  • Escalation rate — how many decisions need human review?
  • False positive rate — are policies too aggressive?

Filter events by policy_type to drill into specific controls.
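
The rates above are straightforward to compute from a filtered event export. A sketch using a hypothetical per-event outcome field (the outcome vocabulary follows the blocked/redacted/escalated/allowed decisions described earlier; the field shape is an assumption):

```python
# Hypothetical policy outcomes, one per event; the vocabulary mirrors
# the decisions described above (blocked, redacted, escalated, allowed).
outcomes = ["allowed", "blocked", "redacted", "allowed", "escalated",
            "allowed", "blocked", "allowed", "allowed", "allowed"]

# Each rate is a share of all evaluated requests
n = len(outcomes)
block_rate = outcomes.count("blocked") / n
redaction_rate = outcomes.count("redacted") / n
escalation_rate = outcomes.count("escalated") / n
print(f"block={block_rate:.0%} redact={redaction_rate:.0%} escalate={escalation_rate:.0%}")
```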

4. Quality metrics

If you deploy quality-scorer or citation-verifier, quality metrics are captured:

  • Output quality scores per request
  • Citation coverage and groundedness
  • Quality trends over time
  • Low-quality escalation frequency
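
Quality scores can be trended and thresholded the same way. A sketch of flagging low-quality outputs, assuming a 0-1 score scale and an illustrative threshold (both assumptions, not documented defaults):

```python
# Hypothetical per-request quality scores from quality-scorer;
# the 0-1 scale and the 0.5 threshold are assumptions for illustration.
scores = [0.91, 0.88, 0.42, 0.95, 0.67, 0.30, 0.89]
THRESHOLD = 0.5

# Count requests that would trigger a low-quality escalation,
# and track the average score for trend monitoring
low_quality = [s for s in scores if s < THRESHOLD]
print(len(low_quality), round(sum(scores) / len(scores), 2))
```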

Team-scoped views

Each team sees only their own data in the console. This is enforced through:

  • Team membership — users belong to teams and see team-scoped data
  • Gateway key scoping — gateway keys are bound to teams, attributing all traffic
  • Role-based access — viewers see events but can't modify configurations

Setting up team attribution

Ensure every request is attributed to a team by using scoped gateway keys:

# Create a gateway key for the data-science team
curl -X POST https://api.keeptrusts.com/v1/tokens \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "data-science-gateway-key",
    "token_type": "gateway",
    "team_id": "data-science-team-id"
  }'

Or pass attribution headers on each request:

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "X-Team-Id: data-science" \
  -H "X-User-Id: analyst-jane" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Analyze Q1 trends"}]
  }'
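
When attribution is passed per request, it is easy to drop a header and send unattributed traffic. A sketch of a client-side guard that refuses to build a request without the attribution headers from the example above (the helper itself is hypothetical, not part of any SDK):

```python
# Header names match the curl example above; the guard function
# itself is a hypothetical client-side helper.
REQUIRED = ("X-Team-Id", "X-User-Id")

def attributed_headers(team: str, user: str) -> dict:
    """Build request headers, failing fast if attribution is missing."""
    headers = {"X-Team-Id": team, "X-User-Id": user,
               "Content-Type": "application/json"}
    missing = [h for h in REQUIRED if not headers.get(h)]
    if missing:
        raise ValueError(f"unattributed request, missing: {missing}")
    return headers

print(attributed_headers("data-science", "analyst-jane")["X-Team-Id"])
```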

Dashboard overview

The console Dashboard page provides an at-a-glance view of:

  • Total requests — Volume of AI usage across the organization
  • Active teams — How many teams are using AI
  • Policy blocks — Requests stopped by policy controls
  • Escalations pending — Human oversight items awaiting review
  • Total spend (period) — Cost of AI usage for the selected time range
  • Top models — Most-used models across the organization

See Overview Dashboard for configuration and customization.


Export and integration

Evidence export

Export observability data for compliance reviews, security audits, or capacity planning:

# Export all events for Q1
kt export create \
  --format csv \
  --from "2026-01-01T00:00:00Z" \
  --to "2026-03-31T23:59:59Z"

# Export only a specific team's events
kt export create \
  --format json \
  --from "2026-04-01T00:00:00Z" \
  --to "2026-04-30T23:59:59Z" \
  --team data-science
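
A JSON export can be reshaped into CSV rows for downstream analytics tools. A minimal sketch using hypothetical export records (a real `kt export create --format json` run would produce the records this parses; the field names are assumptions):

```python
import csv
import io

# Hypothetical export records; field names are assumptions about
# what a JSON export contains.
exported = [
    {"team": "data-science", "model": "gpt-4o", "total_cost": 0.009},
    {"team": "engineering", "model": "gpt-4o-mini", "total_cost": 0.0004},
]

# Write a CSV with one row per exported event
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["team", "model", "total_cost"])
writer.writeheader()
writer.writerows(exported)
print(buf.getvalue().strip())
```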

Webhook integration

Forward events to external systems in real time:

# Create a webhook for security events
curl -X POST https://api.keeptrusts.com/v1/webhooks \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://siem.example.com/intake/keeptrusts",
    "events": ["policy_block", "escalation_created"],
    "active": true
  }'

See Webhooks for configuration details.
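
On the receiving side, the SIEM intake can double-check that only subscribed event types are forwarded. A sketch of such a filter, assuming a payload shape of `{"event": ..., "data": ...}` (the shape is an assumption; the event names come from the webhook created above):

```python
# Event names match the webhook subscription above; the payload
# shape ({"event": ..., "data": ...}) is an assumption.
SUBSCRIBED = {"policy_block", "escalation_created"}

def should_forward(payload: dict) -> bool:
    """Return True if the payload belongs to a subscribed event type."""
    return payload.get("event") in SUBSCRIBED

print(should_forward({"event": "policy_block"}),
      should_forward({"event": "request_completed"}))
```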


Observability-driven decisions

Use observability data to make informed governance decisions:

  • One team drives 60% of spend — allocate a team wallet and review model selection
  • PII redaction rate spiking — investigate whether new data sources are entering prompts
  • Block rate too high (>10%) — review policy thresholds and check for false positives
  • Escalation backlog growing — add reviewers or adjust escalation criteria
  • One model dominates usage — evaluate cheaper alternatives for some use cases
  • Provider latency increasing — check provider health and consider adding failover targets

Quick wins

  1. Check the Dashboard — see aggregate AI usage across your organization right now
  2. Filter Events by team — understand which teams are the heaviest AI users
  3. Review the Spend page — identify your top cost drivers (model, team, provider)
  4. Create a team-scoped gateway key — start attributing traffic to the right teams
  5. Set up a webhook — forward security-relevant events to your SIEM

For AI systems

  • Canonical terms: events, spend tracking, policy outcomes, quality metrics, team-scoped dashboard, event export.
  • Console surfaces: Events page, Spend page, Exports page.
  • CLI commands: kt events tail, kt events list, kt export create.
  • Config keys: audit-logger, quality-scorer, citation-verifier.
  • Best next pages: Events, Cost and Spend, Exports.

For engineers

  • Prerequisites: gateway running with audit-logger in the policy chain; events are captured automatically.
  • Use kt events tail --format json to confirm events flow in real time.
  • Verify spend data: check the Spend page after a few requests to confirm cost per request is populated.
  • Filter Events by policy_type to validate specific policy outcomes (blocks, redactions, escalations).
  • Set up scheduled exports via kt export create for downstream SIEM or analytics pipelines.

For leaders

  • Centralized observability replaces unmonitored AI usage with full cost attribution per team and provider.
  • Team-scoped views maintain data isolation — each team sees only their own activity.
  • Evidence exports satisfy auditor requests in minutes rather than weeks of manual log collection.
  • Spend trend data supports budget forecasting and justifies cost-optimization investments.

Next steps