Chat Analytics & Usage Insights

The Keeptrusts platform provides comprehensive analytics for chat workbench activity. Every conversation, message, and policy interaction is recorded as a decision event, giving you full visibility into how your teams use AI chat and where governance policies are having an impact.

Use this page when

  • You need to monitor conversation volume, token consumption, and cost across your organization.
  • You are tracking policy trigger rates to decide whether policies need tuning.
  • You want to understand model usage distribution and identify cost optimization opportunities.
  • You need to build custom analytics dashboards from exported chat event data.

Primary audience

  • Primary: Platform Administrators monitoring chat adoption, Technical Leaders tracking costs
  • Secondary: Compliance Officers auditing usage trends, AI Engineers optimizing model selection

Where to Find Chat Analytics

Chat analytics are available in the Keeptrusts management console:

  1. Navigate to Dashboard for high-level overview metrics.
  2. Open Events for detailed per-conversation event logs.
  3. Use Spend for cost-focused analytics and wallet tracking.

All chat analytics respect team and role boundaries — users see metrics for conversations they have access to, while administrators see organization-wide data.

Conversation Metrics

Message Volume

Track the number of messages across your organization:

  • Total messages: Count of all prompts and responses.
  • Messages per user: Identify active and inactive users.
  • Messages per team: Compare team-level adoption.
  • Messages over time: Spot trends in daily, weekly, or monthly usage.
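The per-user and per-team counts above can be derived from exported event data. The sketch below assumes each exported event is a dict with `type`, `user_id`, and `team_id` fields; these field names are illustrative, not the exact export schema.

```python
from collections import Counter

def message_counts(events, key):
    """Count chat messages grouped by a field such as 'user_id' or 'team_id'.

    Assumes each exported event is a dict carrying the grouping field;
    the field names here are illustrative, not a documented schema.
    """
    return Counter(e[key] for e in events if e.get("type") == "chat")

# Hypothetical event records for illustration:
events = [
    {"type": "chat", "user_id": "u1", "team_id": "research"},
    {"type": "chat", "user_id": "u1", "team_id": "research"},
    {"type": "chat", "user_id": "u2", "team_id": "support"},
    {"type": "policy", "user_id": "u3", "team_id": "support"},
]
per_user = message_counts(events, "user_id")  # non-chat events are excluded
per_team = message_counts(events, "team_id")
```

The same grouping works for any event field, so one helper covers user-, team-, and model-level message volume.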

Conversation Depth

Understand how users interact with AI:

  • Average turns per conversation: How many back-and-forth exchanges occur.
  • Conversation duration: Time from first message to last activity.
  • Abandoned conversations: Sessions with only one or two messages.
  • Resumed conversations: How often users return to previous threads.
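Average turns and abandoned-session counts can be computed once conversations are grouped by id. This is a minimal sketch assuming a mapping from conversation id to message count; the shape is illustrative, not an export format.

```python
def depth_metrics(conversations):
    """Compute average turns per conversation and count abandoned
    sessions (one or two messages, per the definition above).

    `conversations` maps a conversation id to its message count;
    the input shape is illustrative.
    """
    turns = list(conversations.values())
    avg_turns = sum(turns) / len(turns)
    abandoned = sum(1 for t in turns if t <= 2)
    return avg_turns, abandoned

# Hypothetical per-conversation message counts:
avg, abandoned = depth_metrics({"c1": 8, "c2": 1, "c3": 2, "c4": 5})
```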

Model Distribution

See which models your users prefer:

  • Messages per model: Distribution across configured LLM providers.
  • Model switching: How often users change models within a conversation.
  • Model availability: Uptime and error rates per provider.

Token Usage Tracking

Token consumption directly affects costs and is tracked at multiple levels.

Per-Conversation Token Metrics

Each conversation records:

  • Input tokens: Tokens in the user's prompts (including system prompts and knowledge context)
  • Output tokens: Tokens in the LLM's responses
  • Total tokens: Sum of input and output tokens
  • Context tokens: Tokens consumed by knowledge base assets included in context

Viewing Token Usage

  1. Navigate to Events in the console.
  2. Filter by event type to show chat events.
  3. Click any event to see token breakdowns per message.

Token Usage Aggregations

The Dashboard provides aggregated token views:

  • Daily token consumption: Total tokens used per day.
  • Token consumption by team: Compare usage across teams.
  • Token consumption by model: Identify cost-heavy models.
  • Token trends: Week-over-week and month-over-month comparisons.
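The daily-consumption view above can be reproduced from exported events. The sketch assumes each chat event carries a `date` string plus `input_tokens` and `output_tokens` counts; the field names are illustrative.

```python
from collections import defaultdict

def daily_tokens(events):
    """Sum input and output tokens per calendar day.

    Assumes each chat event carries a 'date' string and token counts;
    field names are illustrative, not a documented schema.
    """
    totals = defaultdict(int)
    for e in events:
        totals[e["date"]] += e["input_tokens"] + e["output_tokens"]
    return dict(totals)

# Hypothetical exported chat events:
usage = daily_tokens([
    {"date": "2026-04-01", "input_tokens": 1200, "output_tokens": 800},
    {"date": "2026-04-01", "input_tokens": 300, "output_tokens": 700},
    {"date": "2026-04-02", "input_tokens": 500, "output_tokens": 500},
])
```

Grouping by `team_id` or `model` instead of `date` yields the team- and model-level aggregations listed above.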

Cost Per Conversation

Keeptrusts tracks costs using the wallet and model pricing system.

How Cost Tracking Works

  1. When a chat message is sent, the gateway reserves the estimated cost against the user's effective wallet (user → team → organization cascade).
  2. After the LLM responds, the reservation is settled to the actual cost based on token counts and model pricing.
  3. The cost is attributed to the conversation, user, team, and model.
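The reserve-then-settle flow in steps 1 and 2 can be sketched as follows. This is an illustration of the described behavior, not the actual gateway implementation; the class and method names are hypothetical.

```python
class Wallet:
    """Minimal sketch of the reserve-then-settle cost flow described above.

    Illustrative only; not the gateway's actual wallet implementation.
    """
    def __init__(self, balance):
        self.balance = balance
        self.reserved = 0.0

    def reserve(self, estimate):
        # Step 1: hold the estimated cost against the available balance.
        if estimate > self.balance - self.reserved:
            raise RuntimeError("insufficient wallet funds")
        self.reserved += estimate
        return estimate

    def settle(self, estimate, actual):
        # Step 2: release the hold and charge the actual token-based cost.
        self.reserved -= estimate
        self.balance -= actual

wallet = Wallet(balance=10.0)
held = wallet.reserve(0.05)        # estimated cost before the LLM call
wallet.settle(held, actual=0.032)  # settled to actual cost afterwards
```

Settling to the actual cost means an over-estimate never permanently over-charges the wallet; only the true token-based cost is deducted.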

Viewing Cost Data

Navigate to Spend in the console to see:

  • Cost per conversation: Total spend for each chat session.
  • Cost per user: Aggregate spend by individual users.
  • Cost per team: Team-level budget tracking.
  • Cost per model: Compare pricing across different LLM providers.
  • Daily spend rate: Monitor burn rate against budgets.

Cost Alerts

Configure cost alerts to receive notifications when:

  • A team exceeds its daily or monthly spend threshold.
  • A single conversation exceeds a cost limit.
  • The organization's wallet balance drops below a warning level.
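The alert conditions above amount to simple threshold checks over aggregated spend. A minimal sketch, assuming per-team spend totals and thresholds are available as dicts (the function and field names are hypothetical):

```python
def check_alerts(team_spend, thresholds, wallet_balance, warning_level):
    """Return alert messages for exceeded team spend thresholds and a
    low organization wallet balance.

    A sketch of the alert conditions listed above; names and shapes
    are illustrative.
    """
    alerts = []
    for team, spend in team_spend.items():
        limit = thresholds.get(team)
        if limit is not None and spend > limit:
            alerts.append(f"{team} exceeded spend threshold ({spend:.2f} > {limit:.2f})")
    if wallet_balance < warning_level:
        alerts.append("organization wallet below warning level")
    return alerts

alerts = check_alerts(
    team_spend={"research": 120.0, "support": 40.0},
    thresholds={"research": 100.0, "support": 50.0},
    wallet_balance=25.0,
    warning_level=50.0,
)
```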

Policy Trigger Rates

Understanding how governance policies interact with chat usage is crucial for policy tuning.

Tracking Policy Interventions

Every policy intervention in chat is recorded:

  • Block rate: Percentage of messages blocked by policies.
  • Escalation rate: Percentage of messages escalated for human review.
  • Redaction rate: Percentage of responses modified by output policies.
  • Disclaimer rate: Percentage of responses with appended disclaimers.
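Each rate above is the share of messages with a given intervention outcome. The sketch below assumes events expose an `outcome` field; the field name and values are illustrative.

```python
def intervention_rates(events):
    """Compute block, escalation, and redaction rates as fractions of
    all chat messages.

    Assumes each event has an 'outcome' field; the field name and its
    values are illustrative, not a documented schema.
    """
    total = len(events)
    return {
        outcome: sum(1 for e in events if e["outcome"] == outcome) / total
        for outcome in ("blocked", "escalated", "redacted")
    }

# Hypothetical events: two allowed, one blocked, one escalated.
rates = intervention_rates([
    {"outcome": "allowed"}, {"outcome": "allowed"},
    {"outcome": "blocked"}, {"outcome": "escalated"},
])
```

A rising block rate for one policy is a signal to review that policy's rules, per the tuning guidance below.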

Analyzing Policy Patterns

In the Events page, filter for policy interventions:

  1. Set the event type filter to blocked, escalated, or redacted.
  2. Review the policy that triggered each intervention.
  3. Look for patterns — specific topics, users, or times of day.

Common Analytics Questions

  • Which policy blocks the most messages? Events → filter by "blocked" → group by policy name
  • Are users self-correcting after blocks? Events → look for successful follow-up messages after blocks
  • Which team triggers the most escalations? Events → filter by "escalated" → group by team
  • Is a specific model generating more redactions? Events → filter by "redacted" → group by model

Building Custom Reports

Using the API

For advanced analytics, query the events API directly:

curl -H "Authorization: Bearer $TOKEN" \
  "$API_URL/v1/events?type=chat&from=2026-04-01&to=2026-04-23"

The response includes full event metadata: tokens, costs, triggered policies, and citation records.
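The same request can be built programmatically. This sketch constructs a GET /v1/events request equivalent to the curl call above using only the standard library; the base URL and token are placeholders, and sending the request requires network access.

```python
import urllib.parse
import urllib.request

def build_chat_events_request(api_url, token, start, end):
    """Build a GET /v1/events request equivalent to the curl example.

    Returns an unsent urllib Request; pass it to urllib.request.urlopen
    to execute. The base URL and token here are placeholders.
    """
    query = urllib.parse.urlencode({"type": "chat", "from": start, "to": end})
    url = f"{api_url}/v1/events?{query}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )

req = build_chat_events_request(
    "https://api.example.com", "TOKEN", "2026-04-01", "2026-04-23"
)
```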

Using Exports

Create scheduled export jobs to send analytics data to external systems:

  1. Navigate to Exports in the console.
  2. Create a new export job with chat event filters.
  3. Configure the schedule (daily, weekly).
  4. Select the destination (download, S3-compatible storage).

See Chat Export for Compliance & Audit for detailed export configuration.

Dashboard Widgets

The console dashboard includes pre-built widgets for chat analytics:

  • Chat Volume: Message count over selected time range
  • Active Users: Unique users with chat activity
  • Token Consumption: Total tokens used with input/output breakdown
  • Spend Summary: Total cost with per-model breakdown
  • Policy Interventions: Block/escalation/redaction counts
  • Top Models: Most-used models by message count
  • Knowledge Citations: Count of knowledge asset references in responses

Best Practices

  • Review analytics weekly: Catches usage anomalies early
  • Set cost alerts per team: Prevents budget overruns
  • Monitor policy trigger trends: Indicates whether policies need tuning
  • Track knowledge citation rates: Validates that grounding assets are effective
  • Export data for long-term analysis: Supports compliance and trend analysis beyond retention windows
  • Compare model cost vs. quality: Optimizes model selection decisions

Next steps

For AI systems

  • Canonical terms: chat analytics, usage insights, token tracking, cost per conversation, policy trigger rate, conversation metrics, model distribution, wallet spend.
  • Console pages: Dashboard, Events, Spend. API endpoints: GET /v1/events, GET /v1/wallets/balance.
  • Best next pages: Chat Export for Compliance, Multi-Model Comparison, Team Chat Environments.

For engineers

  • Navigate to Dashboard for high-level metrics; open Events for per-conversation detail; use Spend for cost-focused views.
  • Filter events by event_type: chat and team_id to isolate team-level analytics.
  • Set cost alerts per team wallet to trigger notifications before budget overruns.
  • Export analytics data as JSON for ingestion into external BI tools (Snowflake, BigQuery, Redshift).
  • Use the GET /v1/events API with date range and team filters for programmatic analytics.

For leaders

  • Weekly analytics review catches usage anomalies and cost spikes before they become budget issues.
  • Policy trigger trends indicate whether governance rules are too strict (blocking legitimate work) or too lax (missing violations).
  • Per-team cost allocation enables showback/chargeback for AI spending.
  • Knowledge citation rates validate the ROI of maintaining curated knowledge assets.
  • Model distribution data informs vendor negotiation and contract renewal decisions.