Monitoring and Auditing IDE AI Usage

When IDE AI assistants route through the Keeptrusts gateway, every request generates an event. You can use these events to monitor usage in real time, track costs per developer, audit policy compliance, and generate reports for stakeholders.

Use this page when

  • You route IDE AI assistant traffic through the Keeptrusts gateway and need to monitor, audit, or report on that usage.
  • You need the practical steps, expected outcomes, and related validation guidance in one place.
  • If you need an exact field-by-field reference instead of a workflow page, use the linked reference pages in Next steps.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Real-Time Monitoring with kt events tail

The fastest way to see IDE traffic is the live event tail:

kt events tail

Output shows each request as it flows through the gateway:

2024-01-15T10:32:15Z ALLOW gpt-4o-mini user=alice tokens=180 cost=$0.003 latency=0.8s
2024-01-15T10:32:18Z ALLOW gpt-4o user=bob tokens=420 cost=$0.021 latency=1.2s
2024-01-15T10:32:20Z BLOCK gpt-4o-mini user=alice policy=redact-secrets reason="API key detected"
2024-01-15T10:32:22Z ALLOW gpt-4o-mini user=carol tokens=95 cost=$0.001 latency=0.1s cache=hit
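Each tail line follows a fixed layout (timestamp, decision, model, then key=value pairs), so it is straightforward to parse in scripts. A minimal sketch in Python; the exact line format may vary by version, so treat the regex as a starting point rather than a guaranteed schema:

```python
import re
import shlex

# Timestamp, decision, and model columns, then key=value pairs.
LINE = re.compile(
    r"^(?P<ts>\S+)\s+(?P<decision>ALLOW|BLOCK|REDACT)\s+(?P<model>\S+)\s*(?P<rest>.*)$"
)

def parse_tail_line(line: str) -> dict:
    """Split one tail line into a dict of fields."""
    m = LINE.match(line.strip())
    if m is None:
        raise ValueError(f"unrecognized line: {line!r}")
    event = {"ts": m["ts"], "decision": m["decision"], "model": m["model"]}
    for pair in shlex.split(m["rest"]):  # shlex keeps quoted values intact
        key, sep, value = pair.partition("=")
        if sep:
            event[key] = value
    return event

line = "2024-01-15T10:32:15Z ALLOW gpt-4o-mini user=alice tokens=180 cost=$0.003 latency=0.8s"
print(parse_tail_line(line)["user"])  # → alice
```

Using `shlex.split` rather than `str.split` preserves quoted values such as `reason="API key detected"`.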

Filtering Events

Narrow the output to specific criteria:

# Only events from a specific user
kt events tail --filter user=alice

# Only blocked events
kt events tail --filter decision=block

# Only cache hits
kt events tail --filter cache_hit=true

# Only a specific model
kt events tail --filter model=gpt-4o
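The same filtering logic can be applied to exported events in scripts. A hedged sketch, assuming events export as JSON objects with fields matching the tail output (`user`, `decision`, `model`, `cache_hit`); check your actual export for the exact field names and value types:

```python
def filter_events(events, **criteria):
    """Keep only events whose fields match every given criterion,
    mirroring `kt events tail --filter key=value`."""
    return [e for e in events if all(e.get(k) == v for k, v in criteria.items())]

events = [
    {"user": "alice", "decision": "allow", "model": "gpt-4o-mini", "cache_hit": False},
    {"user": "alice", "decision": "block", "model": "gpt-4o-mini", "cache_hit": False},
    {"user": "carol", "decision": "allow", "model": "gpt-4o", "cache_hit": True},
]

print(len(filter_events(events, user="alice")))      # → 2
print(len(filter_events(events, decision="block")))  # → 1
print(len(filter_events(events, cache_hit=True)))    # → 1
```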

Console Dashboard

The Keeptrusts console provides a visual dashboard for event analytics. Navigate to Events in the sidebar to see:

  • Event timeline — requests over time, colored by decision (allow, block, redact)
  • Model distribution — which models your team uses most
  • Policy triggers — which policies fire most frequently
  • Top users — which developers generate the most traffic

Filtering in the Dashboard

Use the dashboard filters to focus on IDE traffic:

  1. Open Events in the console sidebar.
  2. Use the Source filter to select IDE-originated events.
  3. Use the Date range picker to narrow to a specific period.
  4. Click any event to see the full details, including the policy evaluation chain.

Cost Attribution

Every event includes cost data (tokens used and estimated cost). Track spending at multiple levels:

Per-Developer Cost

View each developer's AI spending in the console under Cost Center:

Developer  Model                     Requests  Tokens   Cost
alice      gpt-4o-mini               1,240     186,000  $2.79
bob        gpt-4o                    380       228,000  $11.40
carol      claude-sonnet-4-20250514  520       312,000  $9.36
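The same rollup can be computed from exported events. A sketch assuming each exported event carries `user`, `tokens`, and `cost` fields (field names are illustrative):

```python
from collections import defaultdict

def cost_by_developer(events):
    """Aggregate request count, tokens, and cost per user."""
    totals = defaultdict(lambda: {"requests": 0, "tokens": 0, "cost": 0.0})
    for e in events:
        row = totals[e["user"]]
        row["requests"] += 1
        row["tokens"] += e["tokens"]
        row["cost"] += e["cost"]
    return dict(totals)

events = [
    {"user": "alice", "tokens": 180, "cost": 0.003},
    {"user": "alice", "tokens": 95, "cost": 0.001},
    {"user": "bob", "tokens": 420, "cost": 0.021},
]
print(cost_by_developer(events)["alice"]["tokens"])  # → 275
```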

Per-Team Cost

If developers are assigned to teams in the console, costs aggregate by team:

# View team-level spending
kt events export --group-by team --format csv --since 7d

Per-Model Cost

Compare costs across models to optimize your selection:

Model                     Requests  Avg Tokens  Avg Cost  Cache Hit Rate
gpt-4o-mini               3,200     150         $0.002    42%
gpt-4o                    890       600         $0.030    18%
claude-sonnet-4-20250514  520       600         $0.018    22%
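Per-model averages and cache hit rates can likewise be derived from an event export. A sketch under the same assumed fields (`model`, `tokens`, `cost`, `cache_hit`):

```python
from collections import defaultdict

def stats_by_model(events):
    """Per-model request count, average tokens/cost, and cache hit rate."""
    buckets = defaultdict(list)
    for e in events:
        buckets[e["model"]].append(e)
    stats = {}
    for model, evs in buckets.items():
        n = len(evs)
        stats[model] = {
            "requests": n,
            "avg_tokens": sum(e["tokens"] for e in evs) / n,
            "avg_cost": sum(e["cost"] for e in evs) / n,
            "cache_hit_rate": sum(1 for e in evs if e.get("cache_hit")) / n,
        }
    return stats

events = [
    {"model": "gpt-4o-mini", "tokens": 100, "cost": 0.002, "cache_hit": True},
    {"model": "gpt-4o-mini", "tokens": 200, "cost": 0.002, "cache_hit": False},
    {"model": "gpt-4o", "tokens": 600, "cost": 0.030, "cache_hit": False},
]
print(stats_by_model(events)["gpt-4o-mini"]["cache_hit_rate"])  # → 0.5
```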

Engineering Cache Analytics

The engineering cache is especially effective for IDE traffic because developers often trigger similar completions. Monitor cache performance:

  • Hit rate — percentage of requests served from cache (target: 30-50% for IDE traffic)
  • Cost savings — dollars saved by serving cached responses
  • Latency improvement — cached responses are near-instant vs. 0.5-2 seconds for LLM calls

View cache metrics in the console under Engineering Cache or via:

kt events tail --filter cache_hit=true
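Cost savings from the cache can be estimated from exported events: each hit avoids roughly the cost of an equivalent uncached call. A sketch that values each hit at the average cost of non-cached requests to the same model; the event fields (`model`, `cache_hit`, `cost`) are assumptions about the export schema:

```python
from collections import defaultdict

def estimated_cache_savings(events):
    """Approximate dollars saved: each cache hit valued at the average
    cost of a non-cached request to the same model."""
    miss_costs = defaultdict(list)
    hits = defaultdict(int)
    for e in events:
        if e.get("cache_hit"):
            hits[e["model"]] += 1
        else:
            miss_costs[e["model"]].append(e["cost"])
    savings = 0.0
    for model, n in hits.items():
        costs = miss_costs.get(model)
        if costs:
            savings += n * (sum(costs) / len(costs))
    return savings

events = [
    {"model": "gpt-4o-mini", "cache_hit": False, "cost": 0.002},
    {"model": "gpt-4o-mini", "cache_hit": False, "cost": 0.004},
    {"model": "gpt-4o-mini", "cache_hit": True, "cost": 0.0},
]
print(round(estimated_cache_savings(events), 6))  # → 0.003
```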

Policy Compliance Auditing

Track how often policies trigger across IDE traffic:

Redaction Events

Monitor how frequently secrets or PII are caught:

kt events tail --filter decision=redact

Each redaction event records:

  • Which pattern matched
  • What was redacted (pattern name, not the actual secret)
  • Which file context triggered the match
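Tallying which patterns fire most often helps prioritize policy tuning. A sketch assuming exported redaction events carry a `pattern` field alongside `decision` (illustrative names, not a documented schema):

```python
from collections import Counter

def top_redaction_patterns(events):
    """Count how often each redaction pattern matched, most frequent first."""
    return Counter(
        e["pattern"] for e in events if e.get("decision") == "redact"
    ).most_common()

events = [
    {"decision": "redact", "pattern": "aws-access-key"},
    {"decision": "redact", "pattern": "aws-access-key"},
    {"decision": "redact", "pattern": "email-address"},
    {"decision": "allow"},
]
print(top_redaction_patterns(events))
# → [('aws-access-key', 2), ('email-address', 1)]
```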

Block Events

Review blocked requests to identify false positives or genuine policy violations:

kt events tail --filter decision=block

If a policy blocks too many legitimate requests, adjust the policy regex or threshold.
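A simple block-rate check makes that review systematic. A sketch over exported events; the 5% threshold is an example value to tune per team, and the `decision` field name is an assumption:

```python
def block_rate(events):
    """Fraction of requests blocked; a sudden rise can signal an
    overly broad policy or a compromised key."""
    if not events:
        return 0.0
    blocked = sum(1 for e in events if e.get("decision") == "block")
    return blocked / len(events)

events = [{"decision": "allow"}] * 18 + [{"decision": "block"}] * 2
rate = block_rate(events)
print(rate)  # → 0.1
if rate > 0.05:  # example threshold, tune per team
    print("block rate above threshold; review recent policy changes")
```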

Webhook Notifications

Set up webhooks to alert on specific events:

  1. Open Settings → Webhooks in the console.
  2. Create a webhook for event.blocked or event.redacted triggers.
  3. Point it to your Slack, PagerDuty, or custom endpoint.

Example: Get a Slack notification whenever a secret is detected in IDE traffic:

{
  "event": "event.redacted",
  "filter": {
    "policy": "redact-secrets-in-code"
  },
  "destination": "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
}
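On the receiving end, a custom endpoint gets the event payload and can format an alert. A hedged sketch of building a Slack-style message body; the payload fields shown (`user`, `policy`, `pattern`) are assumptions for illustration, not a documented webhook schema:

```python
import json

def slack_message(payload: dict) -> str:
    """Build a Slack-style JSON message body from an assumed redaction payload."""
    return json.dumps({
        "text": (
            f"Secret redacted: pattern `{payload['pattern']}` "
            f"matched for user {payload['user']} "
            f"(policy {payload['policy']})"
        )
    })

payload = {"event": "event.redacted", "user": "alice",
           "policy": "redact-secrets-in-code", "pattern": "aws-access-key"}
print(slack_message(payload))
```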

Exporting Events for Compliance

Generate compliance reports by exporting events:

# Export last 30 days as CSV
kt events export --format csv --since 30d --output ide-audit-report.csv

# Export as JSON for programmatic analysis
kt events export --format json --since 30d --output ide-audit-report.json


The console also supports scheduled exports under Settings → Exports.

What to Include in Compliance Reports

Data Point       Why It Matters
Total requests   Volume of AI usage
Redaction count  Secrets and PII caught before reaching providers
Block count      Policy violations detected
Models used      Which providers hold your data
Cost by team     Budget allocation and accountability
Cache hit rate   Efficiency and cost optimization
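Most of these data points can be computed directly from an event export. A sketch under the assumed event fields used throughout this page (`decision`, `model`, `cache_hit`); cost by team follows the same pattern as the per-developer rollup above:

```python
def compliance_summary(events):
    """Compute core compliance-report data points from exported events."""
    n = len(events)
    return {
        "total_requests": n,
        "redaction_count": sum(1 for e in events if e.get("decision") == "redact"),
        "block_count": sum(1 for e in events if e.get("decision") == "block"),
        "models_used": sorted({e["model"] for e in events if "model" in e}),
        "cache_hit_rate": (sum(1 for e in events if e.get("cache_hit")) / n) if n else 0.0,
    }

events = [
    {"decision": "allow", "model": "gpt-4o-mini", "cache_hit": True},
    {"decision": "redact", "model": "gpt-4o-mini", "cache_hit": False},
    {"decision": "block", "model": "gpt-4o", "cache_hit": False},
    {"decision": "allow", "model": "gpt-4o", "cache_hit": False},
]
print(compliance_summary(events)["cache_hit_rate"])  # → 0.25
```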

Setting Up Alerts

Configure alerts for anomalous usage patterns:

  • Spending spike — alert when a developer or team exceeds their daily budget
  • High block rate — alert when block rate exceeds a threshold (may indicate misconfigured policies or a compromised key)
  • New model usage — alert when a previously unused model appears in traffic
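The spending-spike check reduces to comparing per-developer daily spend against a budget map. A minimal sketch; the budget values are examples, and per-developer daily cost would come from an aggregation like the one shown under Cost Attribution:

```python
def over_budget(daily_costs: dict, budgets: dict) -> list:
    """Return developers whose spend today exceeds their daily budget.
    Developers with no configured budget are never flagged."""
    return [user for user, spent in daily_costs.items()
            if spent > budgets.get(user, float("inf"))]

daily_costs = {"alice": 2.79, "bob": 11.40}
budgets = {"alice": 5.00, "bob": 10.00}
print(over_budget(daily_costs, budgets))  # → ['bob']
```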

For AI systems

  • Canonical terms: Keeptrusts, Monitoring and Auditing IDE AI Usage, ide-integration.
  • Exact feature, config, command, or page names: Monitoring and Auditing IDE AI Usage.
  • Use the linked audience and reference pages in Next steps when you need deeper source material.

For engineers

  • Use the commands, configuration examples, API payloads, or UI steps in this page as the working baseline for Monitoring and Auditing IDE AI Usage.
  • Validate the result with the expected outcomes, troubleshooting notes, or linked workflow pages in this page and Next steps.

For leaders

  • This page matters when planning rollout, governance, support ownership, or operating decisions for Monitoring and Auditing IDE AI Usage.
  • Use the linked audience, architecture, and workflow pages in Next steps to connect this detail to broader implementation choices.

Next steps