Engineering Manager Guide: Scaling AI Adoption

As an Engineering Manager, you translate organizational AI strategy into team execution. You need your team onboarded quickly, governance integrated into existing workflows without friction, and metrics that demonstrate your team is using AI responsibly and productively. Keeptrusts gives you self-service tooling, observability, and guardrails that make governance invisible to your developers.

Use this page when

  • You are onboarding your engineering team to governed AI access through Keeptrusts
  • You need to integrate governance into existing sprint workflows without friction
  • You want to define AI quality standards and enforce them via gateway policies
  • You are tracking team AI adoption metrics and usage patterns
  • You need to troubleshoot onboarding issues (connection errors, auth failures, blocked requests)

Primary audience

  • Primary: Technical Leaders (Engineering Managers, Tech Leads)
  • Secondary: Software Engineers, DevOps Engineers, Product Managers

Team Onboarding Playbook

Pre-Onboarding Checklist

Before your team starts using AI through Keeptrusts:

  • Gateway configuration prepared and validated
  • Team-specific gateway keys provisioned
  • Policy template selected or customized for your team's use case
  • Integration documentation shared with the team
  • Cost budget allocated for the team

Day 1: Gateway Access

  1. Select or customize a policy template in the Console under Templates
  2. Generate gateway keys in Console Settings > Gateway Keys
  3. Distribute the gateway endpoint and keys to your team

# Validate your team's policy configuration
kt policy lint --file team-policy.yaml

# Deploy the team gateway
kt gateway run --policy-config team-policy.yaml --port 41002

# Verify gateway health
kt doctor

Day 2: Developer Integration

Share this integration guide with your developers. Most frameworks just need an endpoint change:

# For OpenAI SDK users — change base URL only
export OPENAI_BASE_URL=http://gateway.internal:41002/v1

# Test connectivity
curl -X POST http://gateway.internal:41002/v1/chat/completions \
-H "Authorization: Bearer $GATEWAY_KEY" \
-H "Content-Type: application/json" \
-d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
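
Before running the curl test, a quick pre-flight check can catch common misconfiguration. This is a minimal sketch: the variable names match those used above, but the placeholder key value is illustrative.

```shell
# Pre-flight check before testing connectivity (placeholder key shown;
# substitute your real team key)
export OPENAI_BASE_URL=http://gateway.internal:41002/v1
export GATEWAY_KEY=example-key

# Fail fast if either variable is unset
: "${OPENAI_BASE_URL:?OPENAI_BASE_URL must point at the gateway}"
: "${GATEWAY_KEY:?GATEWAY_KEY must be set}"

# The OpenAI-compatible endpoint expects the base URL to end in /v1
case "$OPENAI_BASE_URL" in
  */v1) echo "base URL OK: $OPENAI_BASE_URL" ;;
  *)    echo "warning: base URL should end in /v1" ;;
esac
```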

Days 3-5: Monitor and Adjust

# Verify events are flowing from your team
kt events list --since 24h --limit 20

# Check for any blocked requests that might indicate policy tuning needed
kt events list --since 24h --action block

Review the Console Dashboard filtered to your team's gateway to see usage patterns, policy triggers, and any escalations.

Common Onboarding Issues

| Issue | Symptom | Fix |
| --- | --- | --- |
| Connection refused | ECONNREFUSED from app | Verify gateway is running: kt doctor |
| Auth failure | 401 from gateway | Check gateway key is correct and active |
| Model not allowed | 403 from gateway | Add model to allowed list in policy config |
| High block rate | Many requests blocked | Review policy thresholds, adjust for team's use case |
| Slow responses | High latency | Check gateway resources, network path to LLM provider |
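
The status-code fixes above can be wrapped in a small triage helper for developers. The function name and mapping below are illustrative, not part of the kt CLI.

```shell
# Hypothetical triage helper: map a gateway HTTP status to the likely fix.
# Status 000 stands in for a refused connection (curl reports no HTTP code).
suggest_fix() {
  case "$1" in
    000) echo "Connection refused: verify the gateway is running (kt doctor)" ;;
    401) echo "Auth failure: check that the gateway key is correct and active" ;;
    403) echo "Model not allowed: add the model to the policy allowlist" ;;
    *)   echo "Unknown status $1: check kt events list --action block" ;;
  esac
}

suggest_fix 401
```

A developer can feed this the status code from a failed curl call to get the first remediation step without opening the docs.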

Quality Standards

Defining AI Quality for Your Team

Set expectations for how your team uses AI-generated code and content:

| Quality Dimension | Standard | Enforcement |
| --- | --- | --- |
| Code accuracy | All AI-generated code reviewed in PR | Team process |
| Content safety | No harmful or biased output | content-filter policy |
| Data protection | No PII in prompts or responses | pii-detector policy |
| Security | No credentials or secrets in prompts | dlp-filter policy |
| Reliability | Model responses meet quality threshold | quality-scorer policy |

Quality Policy Configuration

policies:
  - name: team-quality-gate
    type: quality-scorer
    min_score: 0.7
    action: escalate
    enabled: true

  - name: team-content-safety
    type: content-filter
    categories: [harmful, biased]
    action: block
    enabled: true

  - name: team-pii-protection
    type: pii-detector
    action: redact
    entity_types: [name, email, phone, financial]
    enabled: true

  - name: team-security
    type: prompt-injection
    action: block
    enabled: true

Measuring Quality Metrics

# Quality score distribution for your team
curl -H "Authorization: Bearer $API_TOKEN" \
"https://api.keeptrusts.com/v1/events?since=7d&policy=quality-scorer"

# Content safety block rate
curl -H "Authorization: Bearer $API_TOKEN" \
"https://api.keeptrusts.com/v1/events?since=7d&policy=content-filter&action=block"
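
Once events are exported, the block rate can be computed locally. This sketch assumes the events were saved as JSON lines with an `action` field; the field names mirror the filters above, but the exact response shape may differ.

```shell
# Sample events export (illustrative shape, not the API's exact schema)
cat > events.jsonl <<'EOF'
{"id": "ev1", "policy": "content-filter", "action": "allow"}
{"id": "ev2", "policy": "content-filter", "action": "block"}
{"id": "ev3", "policy": "content-filter", "action": "allow"}
{"id": "ev4", "policy": "content-filter", "action": "allow"}
{"id": "ev5", "policy": "content-filter", "action": "block"}
EOF

# Block rate = blocked events / total events
total=$(wc -l < events.jsonl)
blocked=$(grep -c '"action": "block"' events.jsonl)
rate=$(awk -v b="$blocked" -v t="$total" 'BEGIN { printf "%.1f", 100 * b / t }')
echo "block rate: ${rate}% (${blocked}/${total})"
```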

Sprint Planning with Governance

Integrating Governance into Sprint Workflow

Governance should not be a separate workstream. Integrate it into your existing sprint process:

| Sprint Phase | Governance Integration |
| --- | --- |
| Planning | Include AI policy requirements in acceptance criteria |
| Development | Developers use governed gateway endpoint (no extra steps) |
| Review | Check AI-related events for the sprint in Console Events |
| Retrospective | Review AI adoption metrics, policy trigger rates |

Sprint Metrics Dashboard

Track these metrics each sprint using Keeptrusts data:

# Sprint AI usage metrics (adjust dates for sprint period)
curl -H "Authorization: Bearer $API_TOKEN" \
"https://api.keeptrusts.com/v1/events?since=14d&group_by=user"

# Sprint cost
curl -H "Authorization: Bearer $API_TOKEN" \
"https://api.keeptrusts.com/v1/events?since=14d&group_by=model"

| Sprint Metric | What It Tells You | Source |
| --- | --- | --- |
| AI requests per developer | Adoption depth | Events grouped by user |
| Unique models used | Feature sophistication | Events grouped by model |
| Block rate | Policy friction | Blocked / total events |
| Escalation count | Items needing human review | Console Escalations |
| Sprint AI cost | Budget consumption | Console Usage |
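
The "AI requests per developer" metric can be derived from the group_by=user export. This sketch assumes the grouped counts were saved as plain "user count" pairs, which is an illustrative format rather than the API's exact response shape.

```shell
# Sample per-user request counts for the sprint (illustrative format)
cat > by_user.txt <<'EOF'
alice 120
bob 80
carol 40
EOF

# Average requests per developer = total requests / number of developers
summary=$(awk '{ total += $2; devs += 1 }
  END { printf "%.1f requests/dev across %d devs", total / devs, devs }' by_user.txt)
echo "$summary"
```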

Acceptance Criteria for AI Features

When your team builds features that use LLM capabilities, include governance in the acceptance criteria:

Feature: AI-powered search
Given the feature uses the Keeptrusts gateway
When a user submits a search query
Then the query passes PII detection before reaching the LLM
And the response passes content filtering before reaching the user
And all interactions are logged as events in Keeptrusts

Team Capacity Planning

| Capacity Item | Planning Factor | Source |
| --- | --- | --- |
| Gateway provisioning | 1 gateway per team or shared per department | Architecture decision |
| Policy customization | 2-4 hours per team for initial setup | CoE templates reduce this |
| Integration development | < 1 day per application | Proxy pattern (endpoint change only) |
| Ongoing monitoring | 30 min/week for event review | Console Dashboard |
| Escalation response | Budget for human review of escalated items | Escalation SLA |

Managing Team Growth

As your team grows, Keeptrusts scales with you:

# Add new developers: provision additional gateway keys
# Done through Console Settings > Gateway Keys

# Monitor new developer adoption
kt events list --since 7d --limit 50

Cost Management for Your Team

Setting Team Budgets

policies:
  - name: team-budget
    type: cost_limit
    monthly_limit: 2000
    action: block
    enabled: true

Tracking Team Spend

The Console Cost Center breaks down spend by user, model, and time period. Review weekly to avoid end-of-month surprises:

# Weekly cost check
curl -H "Authorization: Bearer $API_TOKEN" \
"https://api.keeptrusts.com/v1/events?since=7d&group_by=model"
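
The weekly check can be turned into a quick budget comparison. This sketch assumes spend was exported as "model cost" pairs, which is an illustrative format; the 2000 figure matches the team-budget policy above.

```shell
# Sample weekly spend per model (illustrative format)
cat > weekly_cost.txt <<'EOF'
gpt-4o 310.50
gpt-4o-mini 42.25
EOF

# Sum the week's spend and compare it against the monthly budget
monthly_limit=2000
spent=$(awk '{ s += $2 } END { printf "%.2f", s }' weekly_cost.txt)
echo "week spend: \$${spent}, monthly budget: \$${monthly_limit}"
```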

Cost Optimization Strategies

| Strategy | Implementation |
| --- | --- |
| Right-size model selection | Route simple tasks to cheaper models |
| Prompt optimization | Shorter, more precise prompts reduce token usage |
| Caching patterns | Application-level caching for repeated queries |
| Budget alerts | Cost limit policies with escalation before hard block |
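
Right-sizing model selection can be as simple as routing on prompt length in application code. The helper below is an illustrative sketch, not a Keeptrusts feature; the threshold and model names are assumptions.

```shell
# Hypothetical routing helper: send short prompts to a cheaper model
pick_model() {
  prompt="$1"
  if [ "${#prompt}" -lt 200 ]; then
    echo "gpt-4o-mini"   # cheaper model for simple, short tasks
  else
    echo "gpt-4o"        # stronger model for long, complex prompts
  fi
}

pick_model "Summarize this standup note"
```

Because both models sit behind the same gateway endpoint, routing logic like this needs no governance changes: every request is still policy-checked and logged.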

Developer Experience

Making Governance Frictionless

Your goal is governance that developers don't notice during normal work:

| Principle | Implementation |
| --- | --- |
| Transparent proxy | No SDK changes, just endpoint configuration |
| Fast feedback | kt events tail for real-time debugging |
| Self-service | Teams manage their own gateway keys |
| Clear errors | Policy blocks include actionable error messages |
| Observable | Console Dashboard for usage visibility |

Developer Debugging Workflow

When a developer hits a policy block:

# Check what happened
kt events list --since 1h --action block --limit 10

# Tail events in real-time during development
kt events tail

Engineering Manager Workflow with Keeptrusts

| Task | Frequency | Tool |
| --- | --- | --- |
| Check team adoption metrics | Weekly | Console Dashboard |
| Review team cost | Weekly | Console Cost Center |
| Triage team escalations | As needed | Console Escalations |
| Sprint AI metrics review | Per sprint | Event exports |
| Onboard new team members | As needed | Gateway key provisioning |
| Policy tuning | Monthly | kt policy lint + Console |

Success Metrics for Engineering Managers

| Metric | Target | Source |
| --- | --- | --- |
| Team onboarding time | < 1 day per developer | Onboarding tracker |
| Developer adoption rate | > 80% of team using AI | Events by user |
| Policy false positive rate | < 5% of triggers | Escalation review |
| Sprint AI cost | Within budget | Console Usage |
| Governance-related blockers | < 1 per sprint | Team retrospective |
| Developer satisfaction | Positive governance feedback | Team surveys |

For AI systems

  • Canonical terms: Keeptrusts, team onboarding, gateway keys, policy templates, quality standards, adoption metrics
  • Key surfaces: Console Dashboard (team-scoped view), Console Templates, Console Settings > Gateway Keys, Events API
  • Commands: kt policy lint, kt gateway run, kt doctor, kt events list
  • Onboarding flow: select template → generate gateway keys → set OPENAI_BASE_URL to gateway → verify events flowing
  • Quality policies: content-filter, pii-detector, dlp-filter for automated enforcement
  • Best next pages: Quickstart, Templates Guide, VP Engineering Guide

For engineers

  • Day 1 onboarding: validate config (kt policy lint --file team-policy.yaml), deploy gateway (kt gateway run --policy-config team-policy.yaml --port 41002), verify health (kt doctor)
  • Developer integration: set OPENAI_BASE_URL=http://gateway.internal:41002/v1 — no code changes required
  • Monitor team usage: kt events list --since 24h --limit 20 and filter by gateway in Console Dashboard
  • Troubleshoot common issues: connection refused (gateway not running), 401 (bad gateway key), 403 (model not in allowlist), high block rate (tune policy thresholds)

For leaders

  • Self-service onboarding with templates reduces time from request to first governed AI call to under 1 day
  • Governance is invisible to developers — they change one environment variable and get policy enforcement automatically
  • Team-scoped Console views provide per-team metrics: usage patterns, policy triggers, costs, and escalations without cross-team visibility
  • Quality standards (content safety, PII protection, secret detection) are enforced by policy rather than relying on individual developer discipline

Next steps