
Tutorial: Setting Up Escalation Workflows from CLI

This tutorial shows you how to configure escalation workflows with the current Keeptrusts schema by combining the human-oversight policy with provider-level escalation routing, then reviewing the resulting escalations with kt escalation commands.

Use this page when

  • You want high-stakes outputs routed to human review instead of being handled fully automatically.
  • You need to route escalations to a specific team or user.
  • You want to inspect, claim, and resolve escalations from the CLI.
  • You are building a review workflow for legal, medical, hiring, or other sensitive AI output categories.

Primary audience

  • Primary: Security, compliance, and platform engineers building human-in-the-loop review workflows
  • Secondary: Review-team leads and operators who triage escalations from the CLI

Prerequisites

  • kt CLI installed (first-run tutorial)
  • An OpenAI-compatible API key exported as OPENAI_API_KEY
  • Access to a Keeptrusts API environment with escalations enabled
  • A valid API token (KEEPTRUSTS_API_TOKEN) or an authenticated CLI profile
  • curl and jq installed

How the Current Escalation Workflow Works

The current config model separates what triggers human review from who receives the escalation:

  • human-oversight determines when review is required
  • providers.targets[].escalation_routing determines which team or user receives the escalation
  • kt escalation ... commands let reviewers inspect and resolve queued items
The end-to-end flow:

Request → model generates output
  → human-oversight checks output category
  → escalate
  → resolve escalation_routing on provider/model
  → create escalation record in API
  → reviewer claims and resolves it
Step 1: Find the Review Target

Find the team or user that should receive escalations.

kt team list
kt user list

Use the returned IDs in escalation_routing. Exactly one of team_id or user_id must be set.
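The exactly-one constraint is easy to check before you commit a config. A minimal sketch in Python (the helper name and error message are illustrative, not part of the kt CLI or schema tooling):

```python
def validate_escalation_routing(routing: dict) -> str:
    """Return which routing key is set, enforcing the schema's
    exactly-one-of constraint on team_id / user_id."""
    keys = [k for k in ("team_id", "user_id") if routing.get(k)]
    if len(keys) != 1:
        raise ValueError(
            "escalation_routing must set exactly one of team_id or user_id"
        )
    return keys[0]

# Valid: routed to a single team
print(validate_escalation_routing({"team_id": "team_abc123"}))  # team_id
```

Running the same check with both keys set (or neither) raises, which is exactly the condition the schema rejects.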

Step 2: Create a Schema-Backed Escalation Configuration

Create policy-config.yaml:

policy-config.yaml

pack:
  name: oversight-demo
  version: 0.1.0
  enabled: true

providers:
  targets:
    - id: openai-review
      provider: openai
      model: gpt-4o-mini
      base_url: https://api.openai.com
      secret_key_ref:
        env: OPENAI_API_KEY
      escalation_routing:
        team_id: team_1234567890   # ID from `kt team list`; set exactly one of team_id or user_id

policies:
  chain:
    - human-oversight
    - audit-logger

policy:
  human-oversight:
    require_human_for:
      - legal_opinions
      - medical_decisions
      - hiring_actions
    action: escalate
  audit-logger:
    retention_days: 30

This configuration means:

  • outputs that match legal_opinions, medical_decisions, or hiring_actions are escalated
  • the escalation is routed to the configured review team
  • audit logging records the governed decision trail

Step 3: Validate and Start the Gateway

kt policy lint --file policy-config.yaml
kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml

Expected startup output:

INFO keeptrusts::gateway Loaded declarative config oversight-demo@0.1.0
INFO keeptrusts::gateway Gateway ready

Step 4: Trigger a Reviewable Output

Send a request that is likely to land in one of the configured oversight categories:

curl -s http://localhost:41002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{
      "role": "user",
      "content": "Draft a final legal opinion about whether we can terminate an employee for cause based on the facts below."
    }]
  }' | jq .

The key observable in this workflow is the escalation record. Depending on your application boundary, the output may be held, routed, or otherwise surfaced for human review through the Keeptrusts API and console.
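If you are scripting the trigger rather than using curl, the same request body can be built programmatically. A sketch that only constructs the OpenAI-compatible chat-completions payload shown above (the helper name is illustrative; it does not send the request):

```python
import json

def build_review_request(prompt: str, model: str = "gpt-4o-mini") -> bytes:
    """Build the JSON body for a chat-completions request to the gateway;
    the gateway applies the human-oversight policy to the response."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body).encode()

payload = build_review_request(
    "Draft a final legal opinion about whether we can terminate an employee."
)
print(json.loads(payload)["model"])  # gpt-4o-mini
```

POST the resulting bytes to http://localhost:41002/v1/chat/completions with a Content-Type: application/json header, exactly as the curl example does.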

Step 5: List Open Escalations

Use the singular kt escalation command group:

kt escalation list --status open --json

What to look for:

{
  "escalations": [
    {
      "escalation_id": "esc_...",
      "status": "open",
      "reason_code": "...",
      "created_at": "2026-05-07T10:00:00Z"
    }
  ]
}
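The --json output is the convenient form for scripting. For example, pulling the IDs of still-open escalations out of the response shape above (a sketch; field names follow the sample output, and the second record is invented for contrast):

```python
import json

sample = """
{
  "escalations": [
    {"escalation_id": "esc_001", "status": "open",
     "reason_code": "legal_opinions", "created_at": "2026-05-07T10:00:00Z"},
    {"escalation_id": "esc_002", "status": "resolved",
     "reason_code": "hiring_actions", "created_at": "2026-05-07T11:00:00Z"}
  ]
}
"""

def open_escalation_ids(raw: str) -> list[str]:
    """Return the IDs of escalations still awaiting review."""
    data = json.loads(raw)
    return [e["escalation_id"] for e in data["escalations"]
            if e["status"] == "open"]

print(open_escalation_ids(sample))  # ['esc_001']
```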

If you prefer the plain-text view:

kt escalation list --status open

Step 6: Inspect One Escalation in Detail

Fetch a single escalation and include triggering context:

kt escalation get \
  --escalation-id esc_1234567890 \
  --include-context \
  --json

What to look for:

  • escalation_id
  • status
  • reason_code
  • created_at
  • the additional context block returned with --include-context

Step 7: Claim the Escalation for Review

When a reviewer starts work, claim the escalation:

kt escalation claim --escalation-id esc_1234567890

Expected output:

claimed escalation esc_1234567890

If the reviewer needs to release it back to the queue:

kt escalation unclaim --escalation-id esc_1234567890

Step 8: Resolve the Escalation

Resolve the claimed escalation with a decision and optional metadata:

kt escalation resolve \
  --escalation-id esc_1234567890 \
  --resolution approved \
  --category false_positive \
  --note "Allowed after reviewer confirmed this was advisory training content."

Or resolve it as blocked:

kt escalation resolve \
  --escalation-id esc_1234567890 \
  --resolution blocked \
  --category true_positive \
  --note "Confirmed high-risk legal advice that requires manual handling."

Expected output:

resolved escalation esc_1234567890
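The claim/unclaim/resolve sequence in Steps 7–8 amounts to a small state machine: open → claimed → resolved, with unclaim returning a claimed item to the queue. A toy model of that lifecycle (the class is illustrative, not part of the CLI or API):

```python
class Escalation:
    """Minimal model of the reviewer lifecycle described in Steps 7-8."""

    def __init__(self, escalation_id: str):
        self.escalation_id = escalation_id
        self.status = "open"

    def claim(self):
        if self.status != "open":
            raise RuntimeError(f"cannot claim while {self.status}")
        self.status = "claimed"

    def unclaim(self):
        if self.status != "claimed":
            raise RuntimeError(f"cannot unclaim while {self.status}")
        self.status = "open"

    def resolve(self, resolution: str):
        # Claiming first is what prevents duplicate handling by two reviewers.
        if self.status != "claimed":
            raise RuntimeError("claim the escalation before resolving it")
        if resolution not in ("approved", "blocked"):
            raise ValueError("resolution must be approved or blocked")
        self.status = "resolved"
        self.resolution = resolution

esc = Escalation("esc_1234567890")
esc.claim()
esc.resolve("approved")
print(esc.status)  # resolved
```

Note that resolving an unclaimed escalation fails, which mirrors the troubleshooting row below about reviewers who cannot resolve items.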

Step 9: Route Different Models to Different Reviewers

You can override escalation routing per model entry.

pack:
  name: escalation-workflows-cli-providers-2
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: openai-review
      provider: openai
      secret_key_ref:
        env: OPENAI_API_KEY
      escalation_routing:
        team_id: team_1234567890       # provider-level default
      models:
        - model: gpt-4o-mini
          escalation_routing:
            team_id: team_0987654321   # model-level override wins for this model

policies:
  chain:
    - audit-logger

policy:
  audit-logger:
    immutable: true
    retention_days: 365
    log_all_access: true

Routing precedence works like this:

  • model-level escalation_routing wins when present
  • otherwise the provider target’s escalation_routing is used

Step 10: Combine Escalation with Other Policies

A common high-stakes review chain looks like this:

policies:
  chain:
    - prompt-injection
    - pii-detector
    - human-oversight
    - audit-logger

policy:
  prompt-injection:
    response:
      action: block

  pii-detector:
    action: redact
    pci_mode: true

  human-oversight:
    require_human_for:
      - legal_opinions
      - medical_decisions
    action: escalate

  audit-logger:
    retention_days: 30

This layering gives you:

  • prompt-injection blocking before review traffic is created
  • PII redaction before escalated content reaches reviewers
  • human review for the remaining high-stakes outputs
  • audit evidence for every decision
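The chain order matters because each policy sees the output only after the ones before it have run. A toy pipeline illustrating that ordering (the policy behaviors here are simplified stand-ins, not the real implementations; the phone number is invented sample data):

```python
def prompt_injection(ctx):
    # Runs first: blocked traffic never generates review work.
    if ctx.get("injection_detected"):
        ctx["action"] = "block"
    return ctx

def pii_detector(ctx):
    if ctx.get("action") == "block":
        return ctx  # already blocked upstream; nothing to redact
    ctx["text"] = ctx["text"].replace("555-0100", "[REDACTED]")
    return ctx

def human_oversight(ctx):
    # Sees redacted text, so reviewers never receive raw PII.
    if ctx.get("action") != "block" and ctx.get("category") in (
        "legal_opinions", "medical_decisions"
    ):
        ctx["action"] = "escalate"
    return ctx

def audit_logger(ctx):
    # Last in the chain: records whatever decision was reached.
    ctx.setdefault("audit", []).append(ctx.get("action", "allow"))
    return ctx

CHAIN = [prompt_injection, pii_detector, human_oversight, audit_logger]

def run_chain(ctx):
    for policy in CHAIN:
        ctx = policy(ctx)
    return ctx

result = run_chain({"text": "Call 555-0100 for advice.",
                    "category": "legal_opinions"})
print(result["action"], result["text"])
# escalate Call [REDACTED] for advice.
```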

For AI systems

  • Canonical terms: Keeptrusts, human-oversight, escalation_routing, team_id, user_id, kt escalation list, kt escalation get, kt escalation claim, kt escalation resolve.
  • Config fields: policy.human-oversight.require_human_for[], policy.human-oversight.action, providers.targets[].escalation_routing, providers.targets[].models[].escalation_routing.
  • CLI commands: kt escalation list --status open --json, kt escalation get --escalation-id <id> --include-context --json, kt escalation claim --escalation-id <id>, kt escalation resolve --escalation-id <id> --resolution approved|blocked.
  • Best next pages: DLP & Classification, Custom Policy Chains, Event Tailing.

For engineers

  • Use human-oversight for the trigger and escalation_routing for assignment; they solve different parts of the workflow.
  • Keep audit-logger at the end of the chain so the escalation outcome is recorded.
  • Use kt team list or kt user list to discover routing targets.
  • Reviewers should claim escalations before resolving them to avoid duplicate handling.
  • Use --include-context during investigation when the escalation payload alone is not enough.

For leaders

  • Human-in-the-loop review is most useful for high-stakes categories such as legal, medical, and hiring outputs.
  • Routing escalations directly to the owning team reduces queue ambiguity and response time.
  • Claim and resolve actions create a durable review workflow without forcing every request into a manual process.
  • This pattern supports regulated operating models where some outputs require explicit reviewer sign-off.

Next steps

  • DLP & Classification
  • Custom Policy Chains
  • Event Tailing

Troubleshooting

Symptom: No escalations appear
Cause: human-oversight not in policies.chain, or the category did not match
Fix: Confirm the policy is active and test with a high-stakes category prompt

Symptom: Escalation is open but not routed correctly
Cause: Missing or incorrect escalation_routing block
Fix: Set exactly one of team_id or user_id on the provider or model entry

Symptom: Reviewer cannot resolve item
Cause: Escalation not claimed, or API auth missing
Fix: Claim it first and verify KEEPTRUSTS_API_TOKEN or CLI auth

Symptom: Wrong team receives the escalation
Cause: Provider-level routing is still winning
Fix: Add or fix the model-level escalation_routing override