Engineering Manager Guide: Scaling AI Adoption
As an Engineering Manager, you translate organizational AI strategy into team execution. You need your team onboarded quickly, governance integrated into existing workflows without friction, and metrics that demonstrate your team is using AI responsibly and productively. Keeptrusts gives you self-service tooling, observability, and guardrails that make governance invisible to your developers.
Use this page when
- You are onboarding your engineering team to governed AI access through Keeptrusts
- You need to integrate governance into existing sprint workflows without friction
- You want to define AI quality standards and enforce them via gateway policies
- You are tracking team AI adoption metrics and usage patterns
- You need to troubleshoot onboarding issues (connection errors, auth failures, blocked requests)
Primary audience
- Primary: Technical Leaders (Engineering Managers, Tech Leads)
- Secondary: Software Engineers, DevOps Engineers, Product Managers
Team Onboarding Playbook
Pre-Onboarding Checklist
Before your team starts using AI through Keeptrusts:
- Gateway configuration prepared and validated
- Team-specific gateway keys provisioned
- Policy template selected or customized for your team's use case
- Integration documentation shared with the team
- Cost budget allocated for the team
Day 1: Gateway Access
- Select or customize a policy template in the Console under Templates
- Generate gateway keys in Console Settings > Gateway Keys
- Distribute the gateway endpoint and keys to your team
# Validate your team's policy configuration
kt policy lint --file team-policy.yaml
# Deploy the team gateway
kt gateway run --policy-config team-policy.yaml --port 41002
# Verify gateway health
kt doctor
Day 2: Developer Integration
Share this integration guide with your developers. Most frameworks just need an endpoint change:
# For OpenAI SDK users — change base URL only
export OPENAI_BASE_URL=http://gateway.internal:41002/v1
# Test connectivity
curl -X POST http://gateway.internal:41002/v1/chat/completions \
-H "Authorization: Bearer $GATEWAY_KEY" \
-H "Content-Type: application/json" \
-d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
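For Python developers, the curl call above can be scripted the same way. This is a minimal stdlib-only sketch; the gateway URL and key come from the environment variables shown above, and sending the request (as opposed to building it) assumes a running gateway:

```python
import json
import os
import urllib.request

# Endpoint and key distributed on Day 1; defaults mirror the curl example above.
GATEWAY_URL = os.environ.get("OPENAI_BASE_URL", "http://gateway.internal:41002/v1")
GATEWAY_KEY = os.environ.get("GATEWAY_KEY", "")

def build_chat_request(prompt: str, model: str = "gpt-4o") -> urllib.request.Request:
    """Build a chat-completions request against the governed gateway.

    Building the request is enough to confirm endpoint and header wiring;
    dispatching it with urllib.request.urlopen requires a live gateway.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{GATEWAY_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {GATEWAY_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Hello")
print(req.full_url)
```

Because the gateway is a transparent proxy, the request body is the standard chat-completions payload; only the base URL differs from calling the provider directly.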
Days 3-5: Monitor and Adjust
# Verify events are flowing from your team
kt events list --since 24h --limit 20
# Check for blocked requests that may indicate the policy needs tuning
kt events list --since 24h --action block
Review the Console Dashboard filtered to your team's gateway to see usage patterns, policy triggers, and any escalations.
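The blocked-request check above can feed a quick block-rate calculation when deciding whether policies need tuning. A sketch assuming each event returned by `kt events list` (or the Events API) carries an `action` field, which the `--action block` filter suggests but which you should verify against your actual event schema:

```python
def block_rate(events: list[dict]) -> float:
    """Fraction of events whose action is 'block'; 0.0 for an empty list.

    The 'action' field name is an assumption based on the CLI filter above.
    """
    if not events:
        return 0.0
    blocked = sum(1 for e in events if e.get("action") == "block")
    return blocked / len(events)

sample = [
    {"action": "allow"},
    {"action": "block"},
    {"action": "allow"},
    {"action": "allow"},
]
print(block_rate(sample))  # 0.25
```

A rising block rate during the first week usually means thresholds were set for a different use case, not that developers are misbehaving.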
Common Onboarding Issues
| Issue | Symptom | Fix |
|---|---|---|
| Connection refused | ECONNREFUSED from app | Verify gateway is running: kt doctor |
| Auth failure | 401 from gateway | Check gateway key is correct and active |
| Model not allowed | 403 from gateway | Add model to allowed list in policy config |
| High block rate | Many requests blocked | Review policy thresholds, adjust for team's use case |
| Slow responses | High latency | Check gateway resources, network path to LLM provider |
Quality Standards
Defining AI Quality for Your Team
Set expectations for how your team uses AI-generated code and content:
| Quality Dimension | Standard | Enforcement |
|---|---|---|
| Code accuracy | All AI-generated code reviewed in PR | Team process |
| Content safety | No harmful or biased output | content-filter policy |
| Data protection | No PII in prompts or responses | pii-detector policy |
| Security | No credentials or secrets in prompts | dlp-filter policy |
| Reliability | Model responses meet quality threshold | quality-scorer policy |
Quality Policy Configuration
policies:
- name: team-quality-gate
type: quality-scorer
min_score: 0.7
action: escalate
enabled: true
- name: team-content-safety
type: content-filter
categories: [harmful, biased]
action: block
enabled: true
- name: team-pii-protection
type: pii-detector
action: redact
entity_types: [name, email, phone, financial]
enabled: true
- name: team-security
type: prompt-injection
action: block
enabled: true
Measuring Quality Metrics
# Quality score distribution for your team
curl -H "Authorization: Bearer $API_TOKEN" \
"https://api.keeptrusts.com/v1/events?since=7d&policy=quality-scorer"
# Content safety block rate
curl -H "Authorization: Bearer $API_TOKEN" \
"https://api.keeptrusts.com/v1/events?since=7d&policy=content-filter&action=block"
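Once the quality-scorer events are exported, summarizing them against the team threshold is straightforward. A sketch assuming each event carries a numeric `score` field (an assumption about the event schema); the threshold mirrors `min_score: 0.7` from the team-quality-gate policy above:

```python
def score_buckets(events: list[dict], threshold: float = 0.7) -> dict:
    """Split quality-scorer events into pass/escalate counts.

    The 'score' field name is an assumption; the default threshold
    matches min_score in the team-quality-gate policy config.
    """
    passed = sum(1 for e in events if e.get("score", 0.0) >= threshold)
    return {"pass": passed, "escalate": len(events) - passed}

sample = [{"score": 0.9}, {"score": 0.65}, {"score": 0.72}]
print(score_buckets(sample))  # {'pass': 2, 'escalate': 1}
```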
Sprint Planning with Governance
Integrating Governance into Sprint Workflow
Governance should not be a separate workstream. Integrate it into your existing sprint process:
| Sprint Phase | Governance Integration |
|---|---|
| Planning | Include AI policy requirements in acceptance criteria |
| Development | Developers use governed gateway endpoint (no extra steps) |
| Review | Check AI-related events for the sprint in Console Events |
| Retrospective | Review AI adoption metrics, policy trigger rates |
Sprint Metrics Dashboard
Track these metrics each sprint using Keeptrusts data:
# Sprint AI usage metrics (adjust dates for sprint period)
curl -H "Authorization: Bearer $API_TOKEN" \
"https://api.keeptrusts.com/v1/events?since=14d&group_by=user"
# Sprint cost
curl -H "Authorization: Bearer $API_TOKEN" \
"https://api.keeptrusts.com/v1/events?since=14d&group_by=model"
| Sprint Metric | What It Tells You | Source |
|---|---|---|
| AI requests per developer | Adoption depth | Events grouped by user |
| Unique models used | Feature sophistication | Events grouped by model |
| Block rate | Policy friction | Blocked / total events |
| Escalation count | Items needing human review | Console Escalations |
| Sprint AI cost | Budget consumption | Console Usage |
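The "AI requests per developer" metric above can be computed directly from an event export. A sketch assuming each event includes a `user` field, as the `group_by=user` query implies:

```python
from collections import Counter

def requests_per_developer(events: list[dict]) -> dict[str, int]:
    """Count events per user for the sprint window.

    Assumes each event has a 'user' field, as implied by group_by=user;
    events without one are skipped.
    """
    return dict(Counter(e["user"] for e in events if "user" in e))

sample = [{"user": "ana"}, {"user": "ana"}, {"user": "raj"}]
print(requests_per_developer(sample))  # {'ana': 2, 'raj': 1}
```

Reviewing this per sprint shows adoption depth: a long tail of developers with zero or near-zero requests is an onboarding signal, not a governance one.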
Acceptance Criteria for AI Features
When your team builds features that use LLM capabilities, include governance in the acceptance criteria:
Feature: AI-powered search
Given the feature uses the Keeptrusts gateway
When a user submits a search query
Then the query passes PII detection before reaching the LLM
And the response passes content filtering before reaching the user
And all interactions are logged as events in Keeptrusts
Team Capacity Planning
AI-Related Capacity Considerations
| Capacity Item | Planning Factor | Source |
|---|---|---|
| Gateway provisioning | 1 gateway per team or shared per department | Architecture decision |
| Policy customization | 2-4 hours per team for initial setup | CoE templates reduce this |
| Integration development | < 1 day per application | Proxy pattern — endpoint change only |
| Ongoing monitoring | 30 min/week for event review | Console Dashboard |
| Escalation response | Budget for human review of escalated items | Escalation SLA |
Managing Team Growth
As your team grows, Keeptrusts scales with you:
# Add new developers: provision additional gateway keys
# Done through Console Settings > Gateway Keys
# Monitor new developer adoption
kt events list --since 7d --limit 50
Cost Management for Your Team
Setting Team Budgets
policies:
- name: team-budget
type: cost_limit
monthly_limit: 2000
action: block
enabled: true
Tracking Team Spend
The Console Cost Center breaks down spend by user, model, and time period. Review weekly to avoid end-of-month surprises:
# Weekly cost check
curl -H "Authorization: Bearer $API_TOKEN" \
"https://api.keeptrusts.com/v1/events?since=7d&group_by=model"
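If you prefer to aggregate the weekly export yourself, per-model spend reduces to a simple sum. The `model` and `cost_usd` field names here are assumptions about the event schema; check the actual Events API response before relying on them:

```python
from collections import defaultdict

def spend_by_model(events: list[dict]) -> dict[str, float]:
    """Sum cost per model over a set of events.

    'model' and 'cost_usd' are assumed field names -- verify them
    against your Events API response shape.
    """
    totals: dict[str, float] = defaultdict(float)
    for e in events:
        totals[e.get("model", "unknown")] += float(e.get("cost_usd", 0.0))
    return dict(totals)

sample = [
    {"model": "gpt-4o", "cost_usd": 0.12},
    {"model": "gpt-4o", "cost_usd": 0.08},
    {"model": "gpt-4o-mini", "cost_usd": 0.01},
]
print(spend_by_model(sample))
```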
Cost Optimization Strategies
| Strategy | Implementation |
|---|---|
| Right-size model selection | Route simple tasks to cheaper models |
| Prompt optimization | Shorter, more precise prompts reduce token usage |
| Caching patterns | Application-level caching for repeated queries |
| Budget alerts | Cost limit policies with escalation before hard block |
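The right-sizing strategy can be as simple as a routing rule in the application layer. This is an illustrative sketch only: the model names and the word-count heuristic are assumptions to tune against your team's actual workload, not a Keeptrusts feature:

```python
def pick_model(prompt: str,
               cheap: str = "gpt-4o-mini",
               full: str = "gpt-4o",
               word_limit: int = 40) -> str:
    """Route short, simple prompts to the cheaper model.

    Model names and the word_limit heuristic are illustrative
    assumptions -- replace with your own routing criteria.
    """
    return cheap if len(prompt.split()) <= word_limit else full

print(pick_model("Summarize this ticket title"))  # gpt-4o-mini
```

Because all routed requests still flow through the same gateway endpoint, the policy set and event logging apply regardless of which model the rule selects.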
Developer Experience
Making Governance Frictionless
Your goal is governance that developers don't notice during normal work:
| Principle | Implementation |
|---|---|
| Transparent proxy | No SDK changes, just endpoint configuration |
| Fast feedback | kt events tail for real-time debugging |
| Self-service | Teams manage their own gateway keys |
| Clear errors | Policy blocks include actionable error messages |
| Observable | Console Dashboard for usage visibility |
Developer Debugging Workflow
When a developer hits a policy block:
# Check what happened
kt events list --since 1h --action block --limit 10
# Tail events in real-time during development
kt events tail
Engineering Manager Workflow with Keeptrusts
| Task | Frequency | Tool |
|---|---|---|
| Check team adoption metrics | Weekly | Console Dashboard |
| Review team cost | Weekly | Console Cost Center |
| Triage team escalations | As needed | Console Escalations |
| Sprint AI metrics review | Per sprint | Event exports |
| Onboard new team members | As needed | Gateway key provisioning |
| Policy tuning | Monthly | kt policy lint + Console |
Success Metrics for Engineering Managers
| Metric | Target | Source |
|---|---|---|
| Team onboarding time | < 1 day per developer | Onboarding tracker |
| Developer adoption rate | > 80% of team using AI | Events by user |
| Policy false positive rate | < 5% of triggers | Escalation review |
| Sprint AI cost | Within budget | Console Usage |
| Governance-related blockers | < 1 per sprint | Team retrospective |
| Developer satisfaction | Positive governance feedback | Team surveys |
For AI systems
- Canonical terms: Keeptrusts, team onboarding, gateway keys, policy templates, quality standards, adoption metrics
- Key surfaces: Console Dashboard (team-scoped view), Console Templates, Console Settings > Gateway Keys, Events API
- Commands: kt policy lint, kt gateway run, kt doctor, kt events list
- Onboarding flow: select template → generate gateway keys → set OPENAI_BASE_URL to gateway → verify events flowing
- Quality policies: content-filter, pii-detector, dlp-filter for automated enforcement
- Best next pages: Quickstart, Templates Guide, VP Engineering Guide
For engineers
- Day 1 onboarding: validate config (kt policy lint --file team-policy.yaml), deploy gateway (kt gateway run --policy-config team-policy.yaml --port 41002), verify health (kt doctor)
- Developer integration: set OPENAI_BASE_URL=http://gateway.internal:41002/v1 — no code changes required
- Monitor team usage: kt events list --since 24h --limit 20 and filter by gateway in Console Dashboard
- Troubleshoot common issues: connection refused (gateway not running), 401 (bad gateway key), 403 (model not in allowlist), high block rate (tune policy thresholds)
For leaders
- Self-service onboarding with templates reduces time from request to first governed AI call to under 1 day
- Governance is invisible to developers — they change one environment variable and get policy enforcement automatically
- Team-scoped Console views provide per-team metrics: usage patterns, policy triggers, costs, and escalations without cross-team visibility
- Quality standards (content safety, PII protection, secret detection) are enforced by policy rather than relying on individual developer discipline
Next steps
- Onboard your first team: Quickstart
- Set up team templates: Templates Guide
- Align with VP-level strategy: VP Engineering Guide
- Configure monitoring: Dashboard Overview