Escalation Routing Configuration
Escalation routing controls where policy violations are sent for human review. You can configure routing at the provider level, the model level, or both — model-level takes precedence.
Use this page when
- You need the exact command, config, API, or integration details for Escalation Routing Configuration.
- You are wiring automation or AI retrieval and need canonical names, examples, and constraints.
- If you want a guided rollout instead of a reference page, use the linked workflow pages in Next steps.
Primary audience
- Primary: AI Agents, Technical Engineers
- Secondary: Technical Leaders
How escalation routing works
When a policy triggers an escalation (e.g., `human-oversight`, `flagged-review` in escalate mode, or a quality failure configured to escalate), the gateway routes the escalation to a specific team or user based on the routing config attached to the provider or model that was targeted.
```
Request → Policy chain → Escalation triggered
  → Check model-level escalation_routing
      → [set?]     Route to model's team/user
      → [not set?] Check provider-level escalation_routing
          → [set?]     Route to provider's team/user
          → [not set?] Default escalation queue
```
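The precedence above can be sketched as a small resolver. This is an illustrative sketch, not the gateway's actual internals: the function name and the dict shapes are assumptions, and the default-queue stand-in is hypothetical.

```python
from typing import Optional

DEFAULT_QUEUE = {"queue": "default-escalation"}  # stand-in for the default escalation queue

def resolve_escalation_target(model_routing: Optional[dict],
                              provider_routing: Optional[dict]) -> dict:
    """Return the routing target, preferring model-level over provider-level."""
    if model_routing:        # model-level escalation_routing is set
        return model_routing
    if provider_routing:     # fall back to provider-level escalation_routing
        return provider_routing
    return DEFAULT_QUEUE     # neither set: default escalation queue

# Model-level wins when both are set:
print(resolve_escalation_target({"team_id": "senior"}, {"team_id": "general"}))
# Unset model-level inherits from the provider:
print(resolve_escalation_target(None, {"team_id": "general"}))
```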
Provider-level routing
Route all escalations for a provider to a specific team or user (the `team_id` below is a placeholder UUID):

```yaml
pack:
  name: config-escalation-routing-providers-1
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: openai-prod
      provider: openai
      model: gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
      escalation_routing:
        team_id: "0f8fad5b-d9cb-469f-a165-70867728950e"  # placeholder team UUID

policies:
  chain:
    - audit-logger
  policy:
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true
```
| Field | Type | Required | Description |
|---|---|---|---|
| `escalation_routing.team_id` | string (UUID) | one of | Team to receive escalations |
| `escalation_routing.user_id` | string (UUID) | one of | Individual user to receive escalations |

Exactly one of `team_id` or `user_id` must be set.
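The exactly-one-of constraint can be checked before a pack is deployed. The helper below is an illustrative sketch, not a Keeptrusts API:

```python
def validate_escalation_routing(routing: dict) -> None:
    """Raise if the block does not set exactly one of team_id / user_id."""
    set_fields = [k for k in ("team_id", "user_id") if routing.get(k)]
    if len(set_fields) != 1:
        raise ValueError(
            f"escalation_routing must set exactly one of team_id or user_id, got {set_fields}"
        )

validate_escalation_routing({"team_id": "0f8fad5b-d9cb-469f-a165-70867728950e"})  # passes
```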
Model-level routing
Override the provider-level routing for specific models (the UUIDs below are placeholders, and the `models` list shape is illustrative):

```yaml
pack:
  name: config-escalation-routing-providers-2
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: openai-multi
      provider: openai
      secret_key_ref:
        env: OPENAI_API_KEY
      escalation_routing:
        team_id: "11111111-1111-1111-1111-111111111111"  # general review team (placeholder)
      models:
        - id: gpt-4o
          escalation_routing:
            team_id: "22222222-2222-2222-2222-222222222222"  # senior review team (placeholder)
        - id: gpt-4o-mini  # no escalation_routing: inherits the provider-level team

policies:
  chain:
    - audit-logger
  policy:
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true
```
Model-level routing takes precedence. In this example:

- `gpt-4o` escalations go to the senior review team
- `gpt-4o-mini` escalations go to the general review team (inherited)
Routing to a user
For small teams or critical models, route directly to an individual (the `user_id` below is a placeholder UUID):

```yaml
pack:
  name: config-escalation-routing-providers-3
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: high-risk-model
      provider: openai
      model: gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
      escalation_routing:
        user_id: "33333333-3333-3333-3333-333333333333"  # placeholder user UUID

policies:
  chain:
    - audit-logger
  policy:
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true
```
Policies that trigger escalations
These policies can trigger escalation routing:
| Policy | Escalation trigger |
|---|---|
| `human-oversight` | Always escalates when its conditions are met |
| `flagged-review` | When `mode: "escalate"` |
| `quality-scorer` | When `failure_action.action: "escalate"` |
| `safety-filter` | When `action: "escalate"` |
| `bias-monitor` | When `action: "escalate"` |
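Taken together, a chain that can escalate from several of these policies might look like the following sketch. Field names follow the table above; the exact schema for each policy may differ:

```yaml
policies:
  chain:
    - flagged-review
    - quality-scorer
  policy:
    flagged-review:
      mode: "escalate"        # escalations route per escalation_routing
    quality-scorer:
      failure_action:
        action: "escalate"    # quality failures escalate instead of blocking
```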
Healthcare example
Route clinical model escalations to the medical review team and administrative model escalations to the ops team (the team UUIDs below are placeholders):

```yaml
pack:
  name: healthcare-routing
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: clinical-models
      provider: azure
      base_url: https://clinical.openai.azure.com
      secret_key_ref:
        env: AZURE_OPENAI_KEY
      escalation_routing:
        team_id: "44444444-4444-4444-4444-444444444444"  # medical review team (placeholder)
    - id: admin-models
      provider: openai
      model: gpt-4o-mini
      secret_key_ref:
        env: OPENAI_API_KEY
      escalation_routing:
        team_id: "55555555-5555-5555-5555-555555555555"  # ops team (placeholder)

policies:
  chain:
    - hipaa-phi-detector
    - human-oversight
    - audit-logger
  policy:
    human-oversight:
      enabled: true
      escalation_triggers:
        - high_risk_classification
        - low_confidence
      threshold: 0.7
```
Finding team and user IDs
Team and user IDs are UUIDs from the Keeptrusts API:
```shell
# List teams
kt teams list

# List users
kt users list
```
Or find them in the management console under Members & Teams.
For AI systems
- Canonical terms: Keeptrusts, policy-config.yaml, escalation_routing, team_id, user_id, provider-level routing, model-level routing, human-oversight, flagged-review, quality-scorer.
- Model-level routing takes precedence over provider-level routing; unset models inherit from the provider.
- Best next pages: Providers Configuration, Cloud Provider Configuration, Flagged Review Configuration.
For engineers
- Set exactly one of `team_id` or `user_id` per `escalation_routing` block.
- Model-level routing overrides provider-level; use this for high-risk models that need senior reviewers.
- Find team/user UUIDs with `kt teams list` and `kt users list`, or in the Console under Members & Teams.
- Policies that trigger escalation routing: `human-oversight`, `flagged-review` (escalate mode), `quality-scorer` (escalate action), `safety-filter` (escalate action), `bias-monitor` (escalate action).
- Validate routing by triggering a policy escalation and confirming the event appears in the target team's escalation queue.
For leaders
- Escalation routing ensures policy violations reach the right reviewers — clinical models route to medical teams, financial models route to compliance teams.
- Model-level overrides enable graduated review severity without separate gateway deployments per model.
- Direct-to-user routing for critical models ensures high-sensitivity escalations reach senior decision-makers immediately.
- Clear routing ownership reduces escalation response times and prevents violations from sitting in unowned queues.
Next steps
- Providers Configuration — provider targets and model entries
- Cloud Provider Configuration — cloud-specific provider fields
- Flagged Review Configuration — LLM-based secondary review
- Flagged Review — Inline secondary model review