Monitor AI Traffic in Real Time with kt events

When a policy change goes live or an incident unfolds, you need immediate visibility into what the gateway is doing. The kt events tail command streams decision events in real time — every pass, block, redaction, and escalation as it happens.

Use this page when

  • You need real-time visibility into gateway decisions as they happen (pass, block, redact, escalate).
  • You are validating a policy change immediately after deployment.
  • You want to pipe live events to alerting systems, SIEM, or terminal dashboards.

Primary audience

  • Primary: engineers and SREs monitoring live traffic
  • Secondary: security analysts investigating incidents; on-call operators

Streaming events with kt events tail

# Stream all events from the default gateway
kt events tail

# Stream events from a specific gateway
kt events tail --gateway gw-prod-01

# Stream with full detail (includes policy chain trace)
kt events tail --format detailed

Default output

2025-04-23T14:32:01Z PASS gw-prod-01 openai/gpt-4o user:alice 3 policies evaluated 142ms
2025-04-23T14:32:03Z BLOCK gw-prod-01 openai/gpt-4o user:bob prompt_injection (0.92) 8ms
2025-04-23T14:32:05Z REDACT gw-prod-01 anthropic/claude-3 user:carol pii_redaction (2 entities) 156ms
2025-04-23T14:32:07Z PASS gw-prod-02 openai/gpt-4o-mini user:dave 3 policies evaluated 98ms
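
Each line shows the timestamp, outcome, gateway, provider, user, a policy summary, and the end-to-end latency. The csv format follows the same column order, which is why the awk examples later on this page key on field 2 for the outcome.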

Detailed output

The --format detailed flag shows the full policy chain evaluation trace for each event:

─── Event e-8f3a2b1c ───────────────────────────────
Timestamp: 2025-04-23T14:32:03Z
Gateway: gw-prod-01
Provider: openai/gpt-4o
User: bob
Outcome: BLOCKED
Latency: 8ms

Input Chain:
✗ prompt-injection-guard → BLOCK (score: 0.92, threshold: 0.85)
- pii-input-redaction → skipped (chain short-circuited)
- topic-restriction → skipped (chain short-circuited)

Output Chain: not reached

Message: "Prompt injection detected — request blocked."
────────────────────────────────────────────────────

Filtering events

Focus on exactly what you need with filters:

# Only blocked requests
kt events tail --filter "outcome=blocked"

# Only a specific provider
kt events tail --filter "provider=openai/gpt-4o"

# Only a specific user
kt events tail --filter "user=alice"

# Combine filters (AND logic)
kt events tail --filter "outcome=blocked" --filter "gateway=gw-prod-01"

# Filter by policy name
kt events tail --filter "policy=prompt-injection-guard"

# Filter by latency threshold (slow requests)
kt events tail --filter "latency_ms>500"

Filter reference

Filter       Operators   Example
outcome      =           outcome=blocked, outcome=passed, outcome=redacted, outcome=escalated
gateway      =           gateway=gw-prod-01
provider     =           provider=openai/gpt-4o
user         =           user=alice
policy       =           policy=pii-redaction
latency_ms   >, <, =     latency_ms>500
team         =           team=engineering
model        =           model=gpt-4o
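
For example, the team and model filters from the table combine with any of the others:

# Narrow to one team's traffic on a specific model
kt events tail --filter "team=engineering" --filter "model=gpt-4o"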

Live dashboards in the terminal

Combine kt events tail with standard Unix tools for live dashboards:

# Count events by outcome, updating as each event arrives
kt events tail --format csv | \
awk -F, '{count[$2]++; line=""; for (k in count) line = line k ": " count[k] "  "; printf "\r%s", line; fflush()}'

# Live block rate (refreshes every 10 seconds)
watch -n 10 "kt events tail --since 10m --format csv | \
awk -F, '{total++; if(\$2==\"blocked\") blocked++} END {printf \"Block rate: %.1f%% (%d/%d)\\n\", (total ? blocked/total*100 : 0), blocked, total}'"

# Top triggered policies in the last hour
kt events tail --since 1h --format json | \
jq -r '.triggered_policy // empty' | sort | uniq -c | sort -rn | head -10
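
The same pattern works for latency. A minimal sketch, assuming JSON events carry a latency_ms field matching the filter key of the same name:

# Approximate p95 latency over the last 10 minutes
# (assumes each JSON event has a numeric latency_ms field)
kt events tail --since 10m --format json | \
jq -s 'map(.latency_ms) | sort | .[(length * 0.95 | floor)]'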

Pattern detection

Detect anomalous patterns by comparing current traffic against baselines:

# Spike detection: block rate in the last 5 minutes vs last hour
kt events tail --since 5m --format json | \
jq -s '[.[] | select(.outcome=="blocked")] | length' > /tmp/recent_blocks

kt events tail --since 1h --format json | \
jq -s '[.[] | select(.outcome=="blocked")] | length' > /tmp/hourly_blocks

echo "Recent: $(cat /tmp/recent_blocks) blocks in 5min"
echo "Hourly: $(cat /tmp/hourly_blocks) blocks in 60min"

# Unusual provider errors
kt events tail --filter "outcome=error" --format detailed

Alerting on policy violations

Pipe events to alerting systems for automated incident response:

# Send blocks to a Slack webhook (jsonl emits one event per line for read)
kt events tail --filter "outcome=blocked" --format jsonl | \
while read -r event; do
  curl -s -X POST "$SLACK_WEBHOOK_URL" \
    -H 'Content-Type: application/json' \
    -d "{\"text\": \"🚨 Blocked request: $(echo "$event" | jq -r '.user') triggered $(echo "$event" | jq -r '.triggered_policy')\"}"
done

# Log high-severity events to a file for SIEM ingestion
kt events tail --filter "outcome=blocked" --format jsonl >> /var/log/keeptrusts/blocked-events.jsonl

# Alert when block rate exceeds threshold
kt events tail --since 5m --format json | \
jq -s '[.[] | select(.outcome=="blocked")] | length' | \
xargs -I{} bash -c '[ {} -gt 50 ] && echo "ALERT: {} blocks in 5 minutes" | mail -s "Keeptrusts Alert" ops@company.com'
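
The same loop pattern extends to PagerDuty, which the engineering notes below mention alongside Slack. A sketch against the PagerDuty Events API v2; PD_ROUTING_KEY and the severity mapping are illustrative assumptions, not Keeptrusts settings:

# Open a PagerDuty incident for each blocked request
# (PD_ROUTING_KEY is a hypothetical env var holding an Events API v2 routing key)
kt events tail --filter "outcome=blocked" --format jsonl | \
while read -r event; do
  summary="Blocked: $(echo "$event" | jq -r '.user') triggered $(echo "$event" | jq -r '.triggered_policy')"
  curl -s -X POST "https://events.pagerduty.com/v2/enqueue" \
    -H 'Content-Type: application/json' \
    -d "{\"routing_key\": \"$PD_ROUTING_KEY\", \"event_action\": \"trigger\", \"payload\": {\"summary\": \"$summary\", \"source\": \"keeptrusts-gateway\", \"severity\": \"warning\"}}"
done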

Output formats

Format            Flag                Use case
Table (default)   --format table      Human-readable terminal output
Detailed          --format detailed   Debugging, full chain traces
JSON              --format json       Programmatic consumption, piping to jq
CSV               --format csv        Spreadsheet analysis, metric aggregation
JSONL             --format jsonl      Log aggregation, SIEM ingestion

Combining with historical queries

kt events tail shows live traffic. For historical analysis, use kt events list:

# Query events from the last 24 hours
kt events list --since 24h --limit 1000

# Export events for a compliance review
kt events list --since 7d --filter "outcome=blocked" --format json > weekly-blocks.json

# Count events by outcome for the last month
kt events list --since 30d --format csv | \
awk -F, '{count[$2]++} END {for (k in count) print k": "count[k]}'
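
For a quick trend view, bucket the same CSV output by day. This sketch assumes the ISO 8601 timestamp is the first CSV field, as in the table output shown earlier:

# Daily block counts over the last week (first 10 chars of the timestamp = date)
kt events list --since 7d --filter "outcome=blocked" --format csv | \
awk -F, '{print substr($1, 1, 10)}' | sort | uniq -c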

Business outcomes

Outcome                      How live monitoring helps
Instant incident detection   See blocks and errors the moment they occur, with no waiting for batch reports
Faster root-cause analysis   Detailed chain traces show exactly which policy triggered and why
Policy change validation     Tail events immediately after a deployment to confirm expected behavior
Compliance evidence          Stream events to SIEM or log aggregation for continuous audit trails
Capacity planning            Monitor latency trends and throughput to plan gateway scaling

For AI systems

  • Canonical terms: kt events tail, kt events list, decision event, outcome (passed/blocked/redacted/escalated), policy chain trace.
  • Filters: --filter "outcome=blocked", --filter "provider=openai/gpt-4o", --filter "user=alice", --filter "policy=<name>", --filter "latency_ms>500".
  • Output formats: --format table|detailed|json|csv|jsonl.
  • Historical complement: kt events list --since <duration> --limit <n>.
  • Best next pages: Export Workflows, Gateway Diagnostics, Policy Chains.

For engineers

  • Quick start: kt events tail streams from the default gateway; add --gateway gw-prod-01 to target a specific instance.
  • After a deploy: kt events tail --format detailed to confirm the new policy chain triggers as expected.
  • Alerting: pipe --format jsonl to jq + webhook curl for real-time Slack/PagerDuty alerts on blocks.
  • SIEM ingestion: kt events tail --filter "outcome=blocked" --format jsonl >> /var/log/keeptrusts/blocked.jsonl.
  • Combine with kt events list --since 24h for historical analysis beyond the live stream window.

For leaders

  • Real-time monitoring proves policy enforcement is working the moment a change is deployed — no waiting for batch reports.
  • Live event streaming to SIEM satisfies continuous-monitoring requirements for SOC 2 CC7.1 and similar controls.
  • Pattern detection (spike in blocks, unusual providers) enables early incident detection before business impact.
  • Capacity planning: latency trends inform scaling decisions before users experience degradation.

Next steps