AI Model Risk Management (SR 11-7)

The Federal Reserve's SR 11-7 and the OCC's 2011-12 guidance require financial institutions to maintain rigorous model risk management (MRM) frameworks. As AI models enter trading, credit, and risk workflows, they fall squarely within MRM scope, requiring inventory tracking, independent validation, ongoing monitoring, and governance documentation.

Use this page when

  • Your institution must maintain an AI model inventory and validation framework under SR 11-7 or OCC 2011-12.
  • You need tiered governance controls (Tiers 1-3) for AI models based on risk classification.
  • Model risk management requires independent validation workflows with escalation-based approval gates.
  • You want ongoing performance monitoring and degradation alerting for AI models in production.

Keeptrusts provides the infrastructure to enforce MRM controls at the AI gateway layer, ensuring every model interaction is logged, validated, and auditable.

Primary audience

  • Primary: Technical Leaders
  • Secondary: Technical Engineers, AI Agents

SR 11-7 Requirements Mapped to Keeptrusts

SR 11-7 Requirement                  Keeptrusts Capability
Model inventory and documentation    Console model registry + metadata
Independent validation               Escalation workflows for model approval
Ongoing monitoring                   Event logging + performance alerts
Outcome analysis                     Event export for backtesting comparison
Governance and controls              Policy enforcement at gateway
Audit trail                          Immutable decision event log

Model Inventory via Console

Register all AI models used across trading and risk systems in the Keeptrusts console. The Models page in the console tracks:

  • Model provider and version
  • Intended use case and limitations
  • Approval status and validation date
  • Risk tier classification (Tiers 1-3)

Use the API to programmatically manage your model inventory:

# List all registered models
curl -s -H "Authorization: Bearer $API_TOKEN" \
  https://keeptrusts-api.internal:8080/v1/models | python3 -m json.tool

# Model details including risk tier
curl -s -H "Authorization: Bearer $API_TOKEN" \
  https://keeptrusts-api.internal:8080/v1/models/{model_id}
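
Once the inventory is retrieved, you can script checks against it. The sketch below flags models whose validation date is missing or stale; the field names (`validated_at`, `risk_tier`) are illustrative assumptions about the `/v1/models` response shape, not confirmed API fields.

```python
from datetime import datetime, timedelta

def stale_validations(models: list[dict], max_age_days: int = 365) -> list[dict]:
    """Return models whose validation is missing or older than max_age_days."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    stale = []
    for m in models:
        validated_at = m.get("validated_at")  # assumed field name
        if validated_at is None or datetime.fromisoformat(validated_at) < cutoff:
            stale.append(m)
    return stale

# Sample entries mimicking an assumed /v1/models response shape
inventory = [
    {"id": "gpt-4", "validated_at": "2023-01-15T00:00:00", "risk_tier": 1},
    {"id": "claude-3-5-sonnet", "validated_at": None, "risk_tier": 2},
]
for m in stale_validations(inventory):
    print(f"Revalidation needed: {m['id']} (tier {m['risk_tier']})")
```

Run this against the parsed output of the `/v1/models` call above to build a revalidation worklist for the MRM team.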

Validation Governance Policies

Enforce that only validated models are accessible through the gateway:

# policy-config.yaml
version: "1"
policies:
  - name: restrict-unapproved-models
    description: Block access to models not approved by MRM team
    enforcement: block
    rules:
      - type: model_allowlist
        action: block
        allowed_models:
          - "gpt-4"
          - "gpt-4-turbo"
          - "claude-3-5-sonnet"
        message: "Blocked: Model not approved by Model Risk Management. Contact MRM team."

  - name: model-usage-logging
    description: Log all model interactions for MRM audit
    enforcement: log
    rules:
      - type: log_all
        action: log
        metadata:
          compliance_framework: "SR 11-7"
          log_category: "model_usage"
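
For a sense of the decision this policy encodes, the sketch below mirrors the `model_allowlist` logic in plain Python. It illustrates the rule's behavior under the config shown above; it is not the gateway's actual policy engine.

```python
# Allowlist copied from the policy file above
ALLOWED_MODELS = {"gpt-4", "gpt-4-turbo", "claude-3-5-sonnet"}

def check_model(model: str) -> dict:
    """Return the allow/block decision for a requested model."""
    if model in ALLOWED_MODELS:
        return {"decision": "allow", "model": model}
    return {
        "decision": "block",
        "model": model,
        "message": "Blocked: Model not approved by Model Risk Management. Contact MRM team.",
    }

print(check_model("gpt-4")["decision"])          # allow
print(check_model("gpt-3.5-turbo")["decision"])  # block
```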

Performance Degradation Monitoring

Track model performance over time using event data. Export events and compare against expected baselines:

import json
import subprocess

def export_model_events(model: str, days: int = 30) -> list[dict]:
    """Export recent model usage events for performance analysis."""
    result = subprocess.run(
        [
            "kt", "events", "list",
            "--filter", f"model={model}",
            "--since", f"{days}d",
            "--format", "json",
        ],
        capture_output=True,
        text=True,
    )
    return json.loads(result.stdout)

def check_degradation(events: list[dict], threshold_ms: float = 5000) -> list[dict]:
    """Flag events where latency exceeds the performance threshold."""
    degraded = []
    for event in events:
        latency = event.get("duration_ms", 0)
        if latency > threshold_ms:
            degraded.append({
                "event_id": event["id"],
                "timestamp": event["created_at"],
                "latency_ms": latency,
                "model": event.get("model", "unknown"),
            })
    return degraded

events = export_model_events("gpt-4", days=7)
degraded = check_degradation(events, threshold_ms=5000)

if degraded:
    print(f"ALERT: {len(degraded)} degraded responses in last 7 days")
    for d in degraded:
        print(f"  {d['timestamp']}: {d['latency_ms']}ms ({d['event_id']})")
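
Latency thresholds catch individual slow responses; drift against a validated baseline catches gradual degradation. The sketch below compares mean observed latency to a baseline, reusing the event shape from export_model_events above. The 20% tolerance is an illustrative assumption, not a Keeptrusts default.

```python
def latency_drift(events: list[dict], baseline_ms: float, tolerance: float = 0.20) -> dict:
    """Compare mean observed latency to the validated baseline."""
    if not events:
        return {"status": "no_data"}
    mean_ms = sum(e.get("duration_ms", 0) for e in events) / len(events)
    drift = (mean_ms - baseline_ms) / baseline_ms
    return {
        "mean_ms": mean_ms,
        "drift_pct": round(drift * 100, 1),
        "status": "degraded" if drift > tolerance else "within_baseline",
    }

# Illustrative sample; in practice pass the exported events
sample = [{"duration_ms": 1200}, {"duration_ms": 1800}]
print(latency_drift(sample, baseline_ms=1000))
```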

Model Change Control

Use escalation policies to enforce human review when models are changed or updated:

- name: escalate-model-version-change
  description: Escalate when a model version changes unexpectedly
  enforcement: escalate
  rules:
    - type: regex
      action: escalate
      patterns:
        - "(?i)model.*version.*change"
        - "(?i)updated.*model"
        - "(?i)new.*deployment"
      message: "Escalation: Model version change detected. MRM review required."
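
Before deploying the policy, it is worth checking the regex patterns against sample text. The snippet below mirrors the three patterns in plain Python; it is a local sanity check, not the gateway's matching engine.

```python
import re

# Patterns copied from the escalation policy above
PATTERNS = [
    r"(?i)model.*version.*change",
    r"(?i)updated.*model",
    r"(?i)new.*deployment",
]

def should_escalate(text: str) -> bool:
    """True if any MRM change-control pattern matches the text."""
    return any(re.search(p, text) for p in PATTERNS)

print(should_escalate("Model version change to gpt-4-turbo"))  # True
print(should_escalate("Routine latency report"))               # False
```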

MRM Documentation Generation

Generate model risk documentation from event data for regulatory examinations:

# Export model usage summary for MRM annual review
kt events list \
  --since 365d \
  --format csv \
  --output mrm-annual-review.csv

# Export escalation history for model approval audit
kt events list \
  --filter "decision=escalate" \
  --since 365d \
  --format json > escalation-history.json
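
The JSON export can then be summarized for the audit file. The sketch below counts escalations per model; the model field name follows the event shape used elsewhere on this page and is an assumption about the export format.

```python
import json
from collections import Counter

def summarize_escalations(path: str) -> Counter:
    """Count escalation events per model from a kt events JSON export."""
    with open(path) as f:
        events = json.load(f)
    return Counter(e.get("model", "unknown") for e in events)

# Example with an in-memory sample instead of the exported file:
sample = [{"model": "gpt-4"}, {"model": "gpt-4"}, {"model": "claude-3-5-sonnet"}]
counts = Counter(e.get("model", "unknown") for e in sample)
for model, n in counts.most_common():
    print(f"{model}: {n} escalations")
```

In practice, call summarize_escalations("escalation-history.json") on the file produced by the export above.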

Tier Classification Framework

Align your model risk tiers with AI governance policy strictness:

Risk Tier              Description                                         Gateway Policy
Tier 1 — Critical      Models in live trading, pricing, credit decisions   Block + escalate + full logging
Tier 2 — Significant   Models in risk reporting, client communication      Escalate + full logging
Tier 3 — Low           Research tools, internal summarization              Logging only

Configure per-tier gateway instances:

# Tier 1 — strictest controls
kt gateway run --policy-config policies/tier1-critical.yaml --port 41010

# Tier 2 — moderate controls
kt gateway run --policy-config policies/tier2-significant.yaml --port 41020

# Tier 3 — logging only
kt gateway run --policy-config policies/tier3-low.yaml --port 41030
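
Client code can then route each request to the gateway instance matching the model's risk tier. The helper below maps tiers to the ports used above; the localhost base URL is an illustrative assumption about your deployment.

```python
# Ports match the per-tier gateway instances started above
TIER_PORTS = {1: 41010, 2: 41020, 3: 41030}

def gateway_url(risk_tier: int, host: str = "localhost") -> str:
    """Return the gateway base URL for a model's risk tier."""
    try:
        return f"http://{host}:{TIER_PORTS[risk_tier]}"
    except KeyError:
        raise ValueError(f"Unknown risk tier: {risk_tier}")

print(gateway_url(1))  # http://localhost:41010
```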

Regulatory References

  • SR 11-7 (Federal Reserve) — Supervisory Guidance on Model Risk Management
  • OCC 2011-12 — Sound Practices for Model Risk Management
  • Basel Committee BCBS 239 — Principles for effective risk data aggregation
  • EU AI Act Article 9 — Risk management system requirements for high-risk AI

Next steps

For AI systems

  • Canonical terms: Keeptrusts gateway, SR 11-7 model risk management, model inventory, model validation, risk tier classification, performance degradation alerting, OCC 2011-12.
  • Key config/commands: declarative gateway config for approved-model inventory; kt gateway run --policy-config policies/tier1-critical.yaml --port 41010 (per-tier gateway instances); API model endpoints (GET /v1/models); event export for outcome analysis; Usage for observed model traffic.
  • Best next pages: Backtesting AI with Governance Controls, Risk Model Validation, Governing AI in Trading Systems.

For engineers

  • Prerequisites: API running with model registry configured; per-tier gateway instances on ports 41010 (Tier 1), 41020 (Tier 2), 41030 (Tier 3).
  • Register approved AI models in declarative gateway configuration, then review observed model traffic and costs in Usage.
  • Validate with: curl -s -H "Authorization: Bearer $API_TOKEN" https://keeptrusts-api.internal:8080/v1/models to verify inventory; check escalation workflows fire for unapproved model usage.
  • Deploy Tier 1 (critical) with strictest controls; Tier 3 (low risk) with logging-only policies.

For leaders

  • Directly addresses SR 11-7 and OCC 2011-12 requirements — the primary US regulatory framework for model risk.
  • Tiered governance reduces overhead: critical models get strict validation gates; low-risk models get logging without blocking productivity.
  • Declarative model inventory plus Usage telemetry provides examination-ready documentation of approved models and observed production use.
  • EU AI Act Article 9 also requires risk management systems for high-risk AI — this framework satisfies both US and EU regulator expectations.