Master the Overview Dashboard for AI Operations

The Keeptrusts console dashboard is your command center for AI governance. It surfaces real-time operational metrics, cost trends, policy enforcement outcomes, and team-level comparisons — giving you instant visibility into how AI is being used across your organization.

Use this page when

  • You want to understand the console dashboard layout, KPI cards, and chart capabilities.
  • You need to configure anomaly detection, team comparison views, or auto-refresh intervals.
  • You are investigating a traffic spike or cost anomaly and want to drill down from the dashboard.

Primary audience

  • Primary: Technical Leaders and Operators monitoring AI governance posture
  • Secondary: Technical Engineers investigating incidents, Executives reviewing KPIs

What You'll Accomplish

  • Monitor live AI traffic volume, latency, and error rates at a glance
  • Track cost trending and budget consumption across teams and providers
  • Detect anomalies in usage patterns before they become incidents
  • Compare team performance and policy compliance side by side

Dashboard Layout

When you sign in to the console, the Overview Dashboard loads as your default landing page. It is organized into four zones:

Zone | Purpose
Top bar | Time-range selector, refresh interval, and team/gateway filters
KPI strip | Summary cards for requests, block rate, escalations, and spend
Charts area | Time-series graphs for traffic, cost, and policy outcomes
Activity feed | Recent escalations, alerts, and configuration changes

[Figure: Dashboard overview layout]

KPI Summary Cards

The KPI strip displays four headline metrics for the selected time range:

  • Total Requests — aggregate LLM calls processed through all gateways
  • Block Rate — percentage of requests blocked by policy enforcement
  • Open Escalations — count of unresolved human-in-the-loop items
  • Total Spend — cumulative cost across all providers and teams

Each card includes a trend indicator comparing the current period to the previous one. A green arrow signals improvement; red signals regression relative to your governance goals.
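The direction-of-improvement logic behind the trend arrows can be sketched as follows. This is an illustrative helper, not the console's internal code; the key point is that "green" depends on the metric's governance goal, so for metrics like block rate or spend a decrease is the improvement:

```python
def trend_indicator(current: float, previous: float,
                    lower_is_better: bool = False) -> str:
    """Compare the current period to the previous one.

    Hypothetical sketch of the KPI-card logic: green signals improvement
    relative to the governance goal, so metrics where lower values are
    better (block rate, open escalations, spend) invert the comparison.
    """
    if current == previous:
        return "flat"
    improved = (current < previous) if lower_is_better else (current > previous)
    return "green" if improved else "red"
```

For example, a block rate that rises from 5% to 8% period over period would render red even though the number went up, because a higher block rate is a regression against the goal.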

Customizing KPI Cards

Navigate to Settings → Dashboard Preferences to choose which KPIs appear in the strip. You can replace default cards with:

  • Average latency per provider
  • Token consumption (input vs. output)
  • Policy override count
  • Escalation resolution time (mean)

Real-Time Traffic Charts

The primary chart displays request volume over time, broken down by outcome:

  • Allowed — requests that passed all policy checks
  • Blocked — requests stopped by an input-phase or output-phase policy
  • Escalated — requests routed to the escalation queue for human review
  • Errored — upstream provider failures or gateway errors

Toggle between stacked area, line, and bar views using the chart control menu.

Drill-Down

Click any data point on a chart to drill into the underlying events. The console opens a filtered Events view scoped to the exact time window and outcome category you selected. This is the fastest path from a traffic spike to root-cause investigation.

Cost Trending

The cost panel tracks spend across three dimensions:

  1. By provider — compare OpenAI, Anthropic, Azure, and other provider costs
  2. By team — identify which teams are driving the most spend
  3. By model — see cost distribution across model families (GPT-4, Claude, etc.)

Cost data refreshes every five minutes. For real-time spend tracking, use the Usage page.
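The three cost breakdowns are all aggregations of the same per-request cost records along a different key. A minimal sketch, with an assumed record shape (the field names here are illustrative, not the console's export schema):

```python
from collections import defaultdict

def spend_by(records: list[dict], dimension: str) -> dict[str, float]:
    """Aggregate per-request cost records along one dimension:
    "provider", "team", or "model". Record shape is illustrative."""
    totals: dict[str, float] = defaultdict(float)
    for r in records:
        totals[r[dimension]] += r["cost_usd"]
    return dict(totals)

records = [
    {"provider": "openai", "team": "search", "model": "gpt-4", "cost_usd": 0.12},
    {"provider": "anthropic", "team": "support", "model": "claude", "cost_usd": 0.08},
    {"provider": "openai", "team": "support", "model": "gpt-4", "cost_usd": 0.05},
]
```

Calling `spend_by(records, "team")` groups the same spend by team instead of provider, which is exactly how the panel pivots between its three views.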

Budget Overlay

Enable the budget overlay to superimpose allocated budget lines on the cost chart. When a team's spend trajectory is on track to exceed its monthly allocation, the overlay turns amber. If spend has already exceeded the threshold, it turns red.
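The amber/red coloring described above can be sketched as a simple projection check. This is an assumed linear projection, not necessarily the console's exact algorithm:

```python
def budget_status(spend_to_date: float, monthly_budget: float,
                  day_of_month: int, days_in_month: int) -> str:
    """Sketch of the budget-overlay color logic (assumption: a linear
    projection of month-to-date spend). Red once the allocation is
    already exceeded; amber when the trajectory would exceed it."""
    if spend_to_date > monthly_budget:
        return "red"
    projected = spend_to_date / day_of_month * days_in_month
    return "amber" if projected > monthly_budget else "ok"
```

For instance, a team that has spent $600 of a $1,000 allocation by day 10 of a 30-day month projects to $1,800 and turns the overlay amber well before the budget is actually breached.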

Anomaly Detection

Keeptrusts applies statistical baselines to your traffic patterns. When a metric deviates significantly from the rolling average, the dashboard highlights it with an anomaly badge.

Common anomalies surfaced:

Anomaly | Meaning
Traffic spike | Request volume exceeds 2× the rolling hourly average
Block surge | Block rate jumps more than 15 percentage points
Latency outlier | P95 latency exceeds the 7-day baseline by 3×
Cost acceleration | Hourly spend rate doubles compared to the trailing 24-hour mean
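The threshold rules above can be sketched as a single pass over a metrics snapshot. The field names are illustrative; the console computes these baselines server-side:

```python
def detect_anomalies(metrics: dict) -> list[str]:
    """Apply the documented anomaly thresholds to a metrics snapshot.
    Field names are assumptions for this sketch, not an official schema."""
    anomalies = []
    # Traffic spike: volume exceeds 2x the rolling hourly average
    if metrics["requests_last_hour"] > 2 * metrics["rolling_hourly_avg"]:
        anomalies.append("traffic_spike")
    # Block surge: block rate jumps more than 15 percentage points
    if metrics["block_rate_pct"] - metrics["baseline_block_rate_pct"] > 15:
        anomalies.append("block_surge")
    # Latency outlier: P95 exceeds the 7-day baseline by 3x
    if metrics["p95_latency_ms"] > 3 * metrics["p95_baseline_7d_ms"]:
        anomalies.append("latency_outlier")
    # Cost acceleration: hourly spend doubles vs. the trailing 24h mean
    if metrics["hourly_spend"] >= 2 * metrics["trailing_24h_mean_spend"]:
        anomalies.append("cost_acceleration")
    return anomalies
```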

Click an anomaly badge to view the contributing events and decide whether to adjust policies, scale gateways, or investigate further.

Team Comparison Views

Switch to the Teams tab on the dashboard to compare governance posture across teams:

  • Requests per team (volume and trend)
  • Block rate per team (policy strictness indicator)
  • Escalation rate per team
  • Spend per team vs. allocated budget

Use this view during governance reviews to identify teams that may need policy adjustments or additional training. Export the comparison as CSV for offline analysis or executive reporting.

Filtering and Time Ranges

The top bar provides:

  • Time range — preset options (last hour, 24 h, 7 d, 30 d) or custom date range
  • Gateway filter — scope the dashboard to one or more gateways
  • Team filter — narrow metrics to a specific team
  • Provider filter — isolate traffic to a single upstream provider

Filters persist across page navigation within the same session.

Auto-Refresh

Set the refresh interval from the top bar dropdown. Available intervals: 30 seconds, 1 minute, 5 minutes, or manual. For live monitoring during incident response, use the 30-second interval.

Configuration Example

To set default dashboard preferences for your organization, an admin can configure the following in Settings → Dashboard Preferences:

dashboard:
  default_time_range: "24h"
  refresh_interval_seconds: 60
  kpi_cards:
    - total_requests
    - block_rate
    - open_escalations
    - total_spend
  anomaly_detection: true
  anomaly_sensitivity: "medium"   # low | medium | high
  team_comparison: true

Business Outcomes

Outcome | How the Dashboard Delivers It
Faster incident response | Anomaly badges and drill-down cut mean time to detect from hours to minutes
Budget accountability | Cost trending with budget overlay prevents unplanned overruns
Governance visibility | Team comparison views give executives a single pane for AI compliance posture
Operational confidence | Real-time traffic and error monitoring confirms gateways are healthy before scaling

Next steps

For AI systems

  • Canonical terms: Overview Dashboard, KPI strip, anomaly detection, cost trending, budget overlay, team comparison, drill-down, auto-refresh.
  • KPI cards: Total Requests, Block Rate, Open Escalations, Total Spend (customizable in Settings → Dashboard Preferences).
  • Chart outcomes: Allowed, Blocked, Escalated, Errored; views: stacked area, line, bar.
  • Anomaly types: traffic spike, block surge, latency outlier, cost acceleration.
  • Console navigation: Dashboard (default landing), Settings → Dashboard Preferences, Teams tab.
  • Best next pages: Escalation Management, Gateway Monitoring.

For engineers

  • Dashboard loads as default landing page on sign-in; scope with time-range, gateway, team, and provider filters.
  • Click any chart data point to drill into filtered Events view for root-cause investigation.
  • Configure anomaly sensitivity (low/medium/high) in Settings → Dashboard Preferences.
  • Enable budget overlay to see spend trajectory against allocated budget lines.
  • Use 30-second auto-refresh during incident response; 5-minute for normal operations.

For leaders

  • The dashboard provides a single pane of glass for AI governance posture across all teams and gateways.
  • Anomaly detection surfaces deviations from baseline before they become incidents — reducing mean-time-to-detect.
  • Team comparison views enable governance reviews: identify teams that need policy adjustments or training.
  • Cost trending with budget overlay prevents unplanned overruns and supports quarterly budget discussions.