Product Manager Guide: Shipping AI Features Safely

Shipping AI-powered features requires balancing speed to market with safety, compliance, and user trust. Keeptrusts provides the governance infrastructure that lets you launch AI features confidently — with built-in risk controls, compliance evidence, and the ability to iterate quickly when issues arise.

Use this page when

  • You are launching an AI-powered feature and need a governance-aware launch checklist
  • You need to classify the risk level of an AI feature (Low/Medium/High/Critical)
  • You want to run a controlled rollout using traffic splitting through the gateway
  • You are getting compliance sign-off before shipping an AI feature to production
  • You need to define rollback plans for AI features that behave unexpectedly

Primary audience

  • Primary: Technical Leaders (Product Managers, Product Owners)
  • Secondary: Engineering Managers, Compliance Officers, UX Designers

The AI Feature Governance Challenge

AI features differ from traditional software in critical ways:

| Traditional feature | AI-powered feature |
| --- | --- |
| Deterministic outputs | Probabilistic outputs — different every time |
| Static attack surface | Dynamic attack surface — prompt injection, jailbreaks |
| Data stays internal | Data may be sent to external providers |
| Testing covers known cases | Edge cases are unbounded |
| Rollback is clean | Model behavior may change without code changes |

Keeptrusts addresses each of these differences with policy enforcement at the gateway layer.

AI Feature Launch Checklist

Use this checklist before launching any AI-powered feature:

Pre-Development

  • Define the AI use case — What does the AI component do? What decisions does it influence?
  • Classify risk level — Low (content generation), Medium (recommendations), High (decisions affecting users)
  • Identify data flows — What data goes to the LLM? What comes back? Where is it stored?
  • Select appropriate policy template — Work with your governance team to choose the right policy tier

Development

  • Route through Keeptrusts gateway — All AI requests must go through the governed pathway
  • Implement error handling — Handle blocked requests gracefully in the UI
  • Add disclaimers — Configure disclaimer policies for AI-generated content
  • Test with governance policies active — Don't develop against raw API access
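The "handle blocked requests gracefully" item can be sketched as a small response-mapping helper. This assumes the gateway returns HTTP 409 when a policy blocks a request (as this guide notes for engineers); the field name `content` and the fallback copy are illustrative.

```python
# Minimal sketch: map gateway responses to user-facing UI text.
# Assumption: the gateway returns HTTP 409 on a policy block; the
# "content" field and the fallback messages are illustrative.

def render_ai_response(status_code: int, body: dict) -> str:
    """Return the text the UI should show for a gateway response."""
    if status_code == 200:
        # Happy path: show the model output.
        return body.get("content", "")
    if status_code == 409:
        # Policy block: show a friendly fallback, never a raw error.
        return ("This request was declined by our AI usage policies. "
                "Try rephrasing, or contact support if you think this is a mistake.")
    # Any other failure: degrade gracefully.
    return "The AI assistant is temporarily unavailable. Please try again."
```

The key design choice is that a 409 is an expected, policy-driven outcome and should read differently from an outage.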

Pre-Launch

  • Validate policy configuration — `kt policy lint --file feature-policy.yaml`
  • Run security review — Confirm PII protection, injection defense, and data handling
  • Get compliance sign-off — Export evidence showing policy enforcement in testing
  • Set cost budgets — Configure cost limits appropriate for expected usage
  • Prepare rollback plan — Document how to disable the AI component quickly

Launch

  • Deploy with monitoring — Use the Console Dashboard to monitor in real-time
  • Start with limited rollout — Use traffic splitting to reach a subset of users first
  • Monitor escalation queue — Watch for unexpected policy triggers

Post-Launch

  • Review first-week metrics — Usage, costs, blocks, escalations
  • Collect user feedback — Correlate satisfaction with governance events
  • Tune policies — Adjust thresholds based on real-world data

Risk Assessment Framework

Risk Classification

| Risk level | Criteria | Governance requirement |
| --- | --- | --- |
| Low | AI generates content, no user decisions | Logging, basic content filtering |
| Medium | AI influences recommendations or workflows | PII protection, cost caps, escalation on edge cases |
| High | AI makes or directly influences decisions | Human review escalation, full audit trail, compliance sign-off |
| Critical | AI in regulated domains (finance, health, legal) | All of the above + regulatory-specific policies |
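The tiers above can be expressed as a short helper that checks the most severe criteria first. A minimal sketch; the boolean inputs are hypothetical names for answers a PM would give during risk assessment.

```python
# Hypothetical helper: derive the governance tier from the criteria above.
# Inputs are yes/no answers from the risk assessment; names are illustrative.

def classify_risk(regulated_domain: bool,
                  influences_decisions: bool,
                  influences_recommendations: bool) -> str:
    """Return the risk tier, checking the most severe criteria first."""
    if regulated_domain:            # finance, health, legal
        return "Critical"
    if influences_decisions:        # AI makes or directly influences decisions
        return "High"
    if influences_recommendations:  # AI influences recommendations or workflows
        return "Medium"
    return "Low"                    # content generation only, no user decisions
```

For example, an AI summarizer that never drives a user decision classifies as "Low"; the same model embedded in a loan-approval flow classifies as "Critical".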

Conducting a Risk Assessment

For each AI feature, document:

  1. Data sensitivity — What categories of data reach the LLM?
  2. User impact — What happens if the AI produces incorrect output?
  3. Reversibility — Can the user easily undo or override the AI?
  4. Volume — How many users and requests per day?
  5. Regulatory scope — Does it fall under specific regulations?

Controlled Feature Rollout

Progressive Deployment with Traffic Splitting

Use the gateway's traffic splitting to control AI feature exposure:

```yaml
policies:
  - name: feature-rollout
    type: traffic_split
    description: "Gradual AI feature rollout"
    variants:
      - model: gpt-4o
        weight: 10
        tag: ai-feature-v2
      - model: gpt-4o
        weight: 90
        tag: ai-feature-v1
    enabled: true
```

Rollout Schedule

| Week | New version | Previous version | Gate criteria |
| --- | --- | --- | --- |
| 1 | 5% | 95% | Zero critical escalations |
| 2 | 25% | 75% | Error rate below 1%, user satisfaction stable |
| 3 | 50% | 50% | Cost within budget, no compliance issues |
| 4 | 100% | 0% | All metrics green |
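The gate criteria above can be sketched as a check that runs before increasing the split weight each week. The metric keys are hypothetical; wire them to your real dashboard values.

```python
# Illustrative gate check for the rollout schedule above.
# Metric keys are hypothetical placeholders, not a real API.

def gate_passes(week: int, m: dict) -> bool:
    """Return True if the gate criteria for the given rollout week hold."""
    gates = {
        1: m["critical_escalations"] == 0,
        2: m["error_rate"] < 0.01 and m["satisfaction_stable"],
        3: m["cost_within_budget"] and not m["compliance_issues"],
    }
    if week >= 4:
        # 100% rollout requires every earlier gate to still hold.
        return all(gates.values())
    return gates[week]
```

A failed gate means holding the current split weights, not rolling forward, until the metric recovers.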

Kill Switch

If issues emerge post-launch, instantly disable the AI component:

```shell
# Validate emergency config
kt policy lint --file feature-disabled.yaml

# Apply the kill switch configuration
# (redeploy gateway with the feature-disabled config)
```

Compliance Sign-Off Process

Generating Evidence for Sign-Off

Before launch, generate a compliance evidence package:

```shell
# Export testing-phase events showing governance in action
kt export create \
  --type events \
  --format csv \
  --since 14d \
  --description "Feature X pre-launch compliance evidence"
```

Sign-Off Checklist

| Check | Owner | Evidence source |
| --- | --- | --- |
| PII protection verified | Security team | Block events in test data |
| Cost limits configured | Finance / PM | Console Usage screenshot |
| Disclaimer policy active | Legal / Compliance | Event metadata showing disclaimers |
| Escalation workflow tested | Governance team | Resolved test escalations |
| Audit trail complete | Compliance | Export of test-phase events |
| Rollback plan documented | Engineering | Runbook document |

Measuring AI Feature Success

Usage Metrics

Track these in the Console Dashboard and Events API:

```shell
# Feature adoption: unique users per day
curl -H "Authorization: Bearer $API_TOKEN" \
  "https://api.keeptrusts.com/v1/events?since=7d&format=json" | \
  jq 'group_by(.timestamp[:10]) | map({
    date: .[0].timestamp[:10],
    unique_users: ([.[].user] | unique | length),
    total_requests: length
  })'
```

| Metric | What it tells you | Source |
| --- | --- | --- |
| Daily active AI users | Adoption breadth | Events grouped by user per day |
| Requests per user | Engagement depth | Events per user |
| Block rate | Policy friction | Blocked / total events |
| Escalation rate | Edge case frequency | Escalations / total events |
| Cost per user | Unit economics | Cost Center / active users |
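The rate metrics above can also be computed offline from an events export. A sketch: the `user` and `decision` field names follow the jq examples in this guide, while the decision values "blocked" and "escalated" are assumptions.

```python
# Sketch: compute launch metrics from an exported list of event dicts.
# Field names `user` and `decision` follow the jq examples in this guide;
# the decision values "blocked" and "escalated" are assumptions.

def launch_metrics(events: list) -> dict:
    """Aggregate adoption, block rate, and escalation rate from events."""
    total = len(events)
    users = {e["user"] for e in events}
    blocked = sum(1 for e in events if e.get("decision") == "blocked")
    escalated = sum(1 for e in events if e.get("decision") == "escalated")
    return {
        "active_users": len(users),
        "requests_per_user": total / len(users) if users else 0.0,
        "block_rate": blocked / total if total else 0.0,
        "escalation_rate": escalated / total if total else 0.0,
    }
```

Feeding this a week's worth of exported events gives the same numbers the Console Dashboard shows, in a form you can paste into a launch review.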

Quality Signals

| Signal | Source | Action if trending down |
| --- | --- | --- |
| User satisfaction scores | In-app feedback | Review escalation patterns, adjust policies |
| Task completion rate | Product analytics | Check if blocks are preventing legitimate use |
| Error rate | Events with error status | Investigate provider issues or policy misconfig |
| Repeat usage | User retention metrics | Positive signal — AI feature adds value |

Working with Governance Teams

Regular Governance Review

Schedule monthly reviews with your governance stakeholders:

Agenda:

  1. Feature usage metrics (Console Dashboard)
  2. Security events summary (Escalations)
  3. Cost review (Cost Center)
  4. Policy adjustment requests
  5. Upcoming feature pipeline and risk assessment

Requesting Policy Changes

When you need to adjust policies for a feature:

  1. Document the business justification
  2. Specify the exact policy change needed
  3. Provide evidence from the Events API supporting the change
  4. Get sign-off from security and compliance
  5. Test the change in staging before production

```shell
# Export evidence supporting a policy change request
kt export create \
  --type events \
  --format csv \
  --since 30d \
  --description "Policy change evidence - reduce false positive rate for Feature X"
```

User Feedback Integration

Correlating Feedback with Governance Events

When users report AI quality issues, cross-reference with the event stream:

```shell
# Find events for a specific user around the reported time
curl -H "Authorization: Bearer $API_TOKEN" \
  "https://api.keeptrusts.com/v1/events?user=${USER_ID}&since=24h" | \
  jq '.[] | {timestamp, model, decision, policies_triggered, latency_ms}'
```

Common findings:

  • "AI response was cut off" — Check for redaction policies triggering
  • "AI gave wrong answer" — Check which model handled the request
  • "AI was slow" — Check latency in events, may be provider issue
  • "AI refused to help" — Check for block events, may need policy tuning

Success Metrics for Product Managers

| Metric | Target | Source |
| --- | --- | --- |
| Feature launch cycle time | Under 4 weeks from concept to 100% rollout | Project tracker |
| Governance-related launch delays | Zero unplanned delays | Launch post-mortems |
| User satisfaction with AI features | > 4.0 / 5.0 | User surveys |
| Cost per AI feature per user | Within budget | Cost Center |
| Compliance audit findings | Zero critical findings | Audit reports |

Next steps

For AI systems

  • Canonical terms: Keeptrusts, AI feature launch, risk assessment, controlled rollout, traffic splitting, compliance sign-off, launch checklist
  • Key surfaces: Console Dashboard, Console Templates, Console Escalations, Console Usage, Events API
  • Commands: kt policy lint, kt gateway run
  • Policy types for PM workflows: traffic_split (progressive rollout), disclaimer (AI-generated content disclosure), content-filter (safety), pii-detector (data protection), cost_limit (budget), quality-scorer (output quality), escalation workflows (human review)
  • Risk classification: Low (content generation), Medium (recommendations), High (user decisions), Critical (regulated domains)
  • Best next pages: Templates Guide, Dashboard Overview, Escalations Guide

For engineers

  • Route all AI requests through the gateway — never develop against raw provider API access
  • Validate feature policy: kt policy lint --file feature-policy.yaml
  • Deploy progressive rollout with traffic_split policy (e.g., 10% → 50% → 100%)
  • Handle blocked requests gracefully in UI — gateway returns 409 when policy blocks a request
  • Monitor feature launch: Console Dashboard filtered by gateway shows real-time usage, blocks, and escalations
  • Export launch metrics: kt export create --type events --format csv --since 7d

For leaders

  • AI features differ fundamentally from traditional features: probabilistic outputs, dynamic attack surfaces, external data flows, and unbounded edge cases — governance policies address each risk
  • The launch checklist (Pre-Development → Development → Pre-Launch → Launch → Post-Launch) ensures no governance gap between feature ideation and production
  • Traffic splitting enables controlled rollout to subsets of users, with real-time monitoring and instant rollback by adjusting split weights
  • Feature launch cycle time target is under 4 weeks from concept to 100% rollout, and governance-related delays target zero unplanned delays
  • Cost budgets per feature prevent runaway AI spend during launch without requiring per-request approval