Chat Analytics & Usage Insights
The Keeptrusts platform provides comprehensive analytics for chat workbench activity. Every conversation, message, and policy interaction is recorded as a decision event, giving you full visibility into how your teams use AI chat and where governance policies are having an impact.
Use this page when
- You need to monitor conversation volume, token consumption, and cost across your organization.
- You are tracking policy trigger rates to decide whether policies need tuning.
- You want to understand model usage distribution and identify cost optimization opportunities.
- You need to build custom analytics dashboards from exported chat event data.
Primary audience
- Primary: Platform Administrators monitoring chat adoption, Technical Leaders tracking costs
- Secondary: Compliance Officers auditing usage trends, AI Engineers optimizing model selection
Where to Find Chat Analytics
Chat analytics are available in the Keeptrusts management console:
- Navigate to Dashboard for high-level overview metrics.
- Open Events for detailed per-conversation event logs.
- Use Spend for cost-focused analytics and wallet tracking.
All chat analytics respect team and role boundaries — users see metrics for conversations they have access to, while administrators see organization-wide data.
Conversation Metrics
Message Volume
Track the number of messages across your organization:
- Total messages: Count of all prompts and responses.
- Messages per user: Identify active and inactive users.
- Messages per team: Compare team-level adoption.
- Messages over time: Spot trends in daily, weekly, or monthly usage.
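The volume metrics above are straightforward to reproduce from exported event data. A minimal sketch, assuming hypothetical exported chat events with `timestamp` and `user` fields (the field names are assumptions, not the platform's schema):

```python
from collections import Counter

# Hypothetical exported chat events; field names are assumptions.
events = [
    {"timestamp": "2026-04-01T09:15:00Z", "user": "alice"},
    {"timestamp": "2026-04-01T14:02:00Z", "user": "bob"},
    {"timestamp": "2026-04-02T10:30:00Z", "user": "alice"},
]

def messages_per_day(events):
    """Count messages by calendar day to spot daily usage trends."""
    return Counter(e["timestamp"][:10] for e in events)

def messages_per_user(events):
    """Count messages by user to identify active and inactive users."""
    return Counter(e["user"] for e in events)
```

The same `Counter` pattern extends to per-team counts by swapping in a team identifier field.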
Conversation Depth
Understand how users interact with AI:
- Average turns per conversation: How many back-and-forth exchanges occur.
- Conversation duration: Time from first message to last activity.
- Abandoned conversations: Sessions with only one or two messages.
- Resumed conversations: How often users return to previous threads.
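Depth metrics such as average turns and abandonment can be derived from per-conversation summaries. A sketch, assuming hypothetical records with a `turns` field and treating two or fewer messages as abandoned (matching the definition above):

```python
# Hypothetical per-conversation summaries; field names are assumptions.
conversations = [
    {"id": "c1", "turns": 8},
    {"id": "c2", "turns": 1},
    {"id": "c3", "turns": 3},
]

def depth_metrics(conversations, abandon_threshold=2):
    """Average turns per conversation plus a count of likely-abandoned sessions."""
    turns = [c["turns"] for c in conversations]
    return {
        "avg_turns": sum(turns) / len(turns),
        "abandoned": sum(1 for t in turns if t <= abandon_threshold),
    }
```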
Model Distribution
See which models your users prefer:
- Messages per model: Distribution across configured LLM providers.
- Model switching: How often users change models within a conversation.
- Model availability: Uptime and error rates per provider.
Token Usage Tracking
Token consumption directly affects costs and is tracked at multiple levels.
Per-Conversation Token Metrics
Each conversation records:
| Metric | Description |
|---|---|
| Input tokens | Tokens in the user's prompts (including system prompts and knowledge context) |
| Output tokens | Tokens in the LLM's responses |
| Total tokens | Sum of input and output tokens |
| Context tokens | Tokens consumed by knowledge base assets included in context |
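Rolling per-message counts up to the conversation level follows the table's definitions directly. A sketch, assuming hypothetical per-message records (the field names are assumptions):

```python
# Hypothetical per-message token records; field names are assumptions.
messages = [
    {"input_tokens": 520, "output_tokens": 340, "context_tokens": 300},
    {"input_tokens": 410, "output_tokens": 290, "context_tokens": 300},
]

def conversation_tokens(messages):
    """Aggregate per-message token counts into conversation-level totals.

    Total tokens = input + output, per the metric definitions; context
    tokens (knowledge assets in context) are reported separately.
    """
    input_t = sum(m["input_tokens"] for m in messages)
    output_t = sum(m["output_tokens"] for m in messages)
    context_t = sum(m.get("context_tokens", 0) for m in messages)
    return {"input": input_t, "output": output_t,
            "context": context_t, "total": input_t + output_t}
```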
Viewing Token Usage
- Navigate to Events in the console.
- Filter by event type to show chat events.
- Click any event to see token breakdowns per message.
Token Usage Aggregations
The Dashboard provides aggregated token views:
- Daily token consumption: Total tokens used per day.
- Token consumption by team: Compare usage across teams.
- Token consumption by model: Identify cost-heavy models.
- Token trends: Week-over-week and month-over-month comparisons.
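Week-over-week and month-over-month trends reduce to a percent-change calculation over two aggregation windows. A small helper (not a platform API, just a sketch of the arithmetic):

```python
def pct_change(current, previous):
    """Percent change between two aggregation windows, e.g. this week's
    token total vs. last week's."""
    if previous == 0:
        return float("inf")  # no baseline to compare against
    return 100.0 * (current - previous) / previous
```

For example, moving from 100,000 tokens last week to 130,000 this week is a 30% week-over-week increase.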
Cost Per Conversation
Keeptrusts tracks costs using the wallet and model pricing system.
How Cost Tracking Works
- When a chat message is sent, the gateway reserves the estimated cost against the user's effective wallet (user → team → organization cascade).
- After the LLM responds, the reservation is settled to the actual cost based on token counts and model pricing.
- The cost is attributed to the conversation, user, team, and model.
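The reserve-then-settle flow above can be sketched in a few lines. The per-token prices below are illustrative placeholders, not real rates; actual pricing comes from the model pricing system:

```python
# Hypothetical per-1K-token prices; real rates come from model pricing config.
PRICE_PER_1K = {"input": 0.003, "output": 0.015}

def estimate_cost(input_tokens, output_tokens):
    """Cost for a given token count at the configured model pricing."""
    return (input_tokens * PRICE_PER_1K["input"]
            + output_tokens * PRICE_PER_1K["output"]) / 1000

def settle(reserved, actual_input, actual_output):
    """Settle a reservation against actual token counts.

    Returns (actual_cost, refund): the difference between the reserved
    estimate and the actual cost is released back to the wallet.
    """
    actual = estimate_cost(actual_input, actual_output)
    return actual, reserved - actual
```

A reservation estimated at 1,000 input / 500 output tokens that settles at 800 / 400 actual tokens releases the unused portion back to the effective wallet.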
Viewing Cost Data
Navigate to Spend in the console to see:
- Cost per conversation: Total spend for each chat session.
- Cost per user: Aggregate spend by individual users.
- Cost per team: Team-level budget tracking.
- Cost per model: Compare pricing across different LLM providers.
- Daily spend rate: Monitor burn rate against budgets.
Cost Alerts
Configure cost alerts to receive notifications when:
- A team exceeds its daily or monthly spend threshold.
- A single conversation exceeds a cost limit.
- The organization's wallet balance drops below a warning level.
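The alert conditions above amount to simple threshold checks over aggregated spend. A sketch of the evaluation logic (the function and its inputs are hypothetical, not a platform API):

```python
def cost_alerts(team_spend, team_thresholds, wallet_balance, warning_level):
    """Return alert messages for spend-threshold and wallet-level breaches."""
    alerts = [
        f"team {t}: spend {s:.2f} exceeds threshold {team_thresholds[t]:.2f}"
        for t, s in team_spend.items()
        if s > team_thresholds.get(t, float("inf"))  # no threshold = no alert
    ]
    if wallet_balance < warning_level:
        alerts.append(
            f"wallet balance {wallet_balance:.2f} below warning level {warning_level:.2f}"
        )
    return alerts
```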
Policy Trigger Rates
Understanding how governance policies interact with chat usage is crucial for policy tuning.
Tracking Policy Interventions
Every policy intervention in chat is recorded:
- Block rate: Percentage of messages blocked by policies.
- Escalation rate: Percentage of messages escalated for human review.
- Redaction rate: Percentage of responses modified by output policies.
- Disclaimer rate: Percentage of responses with appended disclaimers.
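Each of these rates is the share of messages with a given policy outcome. A sketch over hypothetical event records (the `outcome` field name is an assumption):

```python
from collections import Counter

# Hypothetical chat events with a policy outcome field; names are assumptions.
events = [
    {"outcome": "allowed"}, {"outcome": "blocked"}, {"outcome": "allowed"},
    {"outcome": "escalated"}, {"outcome": "redacted"}, {"outcome": "allowed"},
]

def intervention_rates(events):
    """Percentage of messages per policy intervention type."""
    counts = Counter(e["outcome"] for e in events)
    total = len(events)
    return {k: 100.0 * counts[k] / total
            for k in ("blocked", "escalated", "redacted")}
```

A rising block rate after a policy change is a signal to review whether the rule is catching legitimate work.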
Analyzing Policy Patterns
In the Events page, filter for policy interventions:
- Set the event type filter to blocked, escalated, or redacted.
- Review the policy that triggered each intervention.
- Look for patterns — specific topics, users, or times of day.
Common Analytics Questions
| Question | Where to Look |
|---|---|
| Which policy blocks the most messages? | Events → filter by "blocked" → group by policy name |
| Are users self-correcting after blocks? | Events → look for successful follow-up messages after blocks |
| Which team triggers the most escalations? | Events → filter by "escalated" → group by team |
| Is a specific model generating more redactions? | Events → filter by "redacted" → group by model |
Building Custom Reports
Using the API
For advanced analytics, query the events API directly:
```shell
curl -H "Authorization: Bearer $TOKEN" \
  "$API_URL/v1/events?type=chat&from=2026-04-01&to=2026-04-23"
```
The response includes full event metadata: tokens, costs, policies triggered, and citation records.
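From there, the response can be aggregated in a few lines. A sketch that assumes a hypothetical response shape (the JSON schema below is an assumption, not the documented API contract):

```python
import json

# Hypothetical response body for GET /v1/events; the schema is an assumption.
body = """{"events": [
  {"type": "chat", "tokens": {"input": 120, "output": 340}, "cost": 0.0057},
  {"type": "chat", "tokens": {"input": 80, "output": 210}, "cost": 0.0036}
]}"""

data = json.loads(body)

# Aggregate cost and token totals across the returned events.
total_cost = sum(e["cost"] for e in data["events"])
total_tokens = sum(e["tokens"]["input"] + e["tokens"]["output"]
                   for e in data["events"])
```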
Using Exports
Create scheduled export jobs to send analytics data to external systems:
- Navigate to Exports in the console.
- Create a new export job with chat event filters.
- Configure the schedule (daily, weekly).
- Select the destination (download, S3-compatible storage).
See Chat Export for Compliance & Audit for detailed export configuration.
Dashboard Widgets
The console dashboard includes pre-built widgets for chat analytics:
| Widget | Shows |
|---|---|
| Chat Volume | Message count over selected time range |
| Active Users | Unique users with chat activity |
| Token Consumption | Total tokens used with input/output breakdown |
| Spend Summary | Total cost with per-model breakdown |
| Policy Interventions | Block/escalation/redaction counts |
| Top Models | Most-used models by message count |
| Knowledge Citations | Count of knowledge asset references in responses |
Best Practices
| Practice | Why It Matters |
|---|---|
| Review analytics weekly | Catches usage anomalies early |
| Set cost alerts per team | Prevents budget overruns |
| Monitor policy trigger trends | Indicates whether policies need tuning |
| Track knowledge citation rates | Validates that grounding assets are effective |
| Export data for long-term analysis | Supports compliance and trend analysis beyond retention windows |
| Compare model cost vs. quality | Optimizes model selection decisions |
Next steps
- Export analytics data for compliance in Chat Export for Compliance & Audit.
- Compare model performance and cost in Multi-Model Chat Comparison.
- Configure spending controls in Customizing the Chat Experience.
For AI systems
- Canonical terms: chat analytics, usage insights, token tracking, cost per conversation, policy trigger rate, conversation metrics, model distribution, wallet spend.
- Console pages: Dashboard, Events, Spend. API endpoints: `GET /v1/events`, `GET /v1/wallets/balance`.
- Best next pages: Chat Export for Compliance, Multi-Model Comparison, Team Chat Environments.
For engineers
- Navigate to Dashboard for high-level metrics; open Events for per-conversation detail; use Spend for cost-focused views.
- Filter events by `event_type: chat` and `team_id` to isolate team-level analytics.
- Set cost alerts per team wallet to trigger notifications before budget overruns.
- Export analytics data as JSON for ingestion into external BI tools (Snowflake, BigQuery, Redshift).
- Use the `GET /v1/events` API with date range and team filters for programmatic analytics.
For leaders
- Weekly analytics review catches usage anomalies and cost spikes before they become budget issues.
- Policy trigger trends indicate whether governance rules are too strict (blocking legitimate work) or too lax (missing violations).
- Per-team cost allocation enables showback/chargeback for AI spending.
- Knowledge citation rates validate the ROI of maintaining curated knowledge assets.
- Model distribution data informs vendor negotiation and contract renewal decisions.