
Solutions Architect Guide: Enterprise AI Integration

As a Solutions Architect designing enterprise AI deployments, you bridge the gap between business requirements and technical implementation. Keeptrusts provides the governance layer that makes AI adoption architecturally sound — with clear integration patterns, scalable deployment models, and measurable PoC outcomes.

Use this page when

  • You are designing a reference architecture for enterprise AI governance
  • You need to choose between single-app, shared gateway, or federated gateway topologies
  • You are planning a PoC deployment to validate Keeptrusts in your environment
  • You want to integrate AI governance using the proxy pattern (no application code changes)
  • You are designing a migration strategy from ungoverned AI to governed AI

Primary audience

  • Primary: Technical Engineers (Solutions Architects, Enterprise Architects)
  • Secondary: Cloud Architects, Platform Engineers, CTOs

Reference Architectures

Architecture 1: Single-Application Gateway

Simplest deployment — one application, one gateway, governed AI access:

┌─────────────────────────────────────────┐
│ Application                             │
│                                         │
│ Service → Keeptrusts Gateway → LLM API  │
│                         │               │
│                         └── Events → API│
└─────────────────────────────────────────┘

When to use: PoC, single-team deployments, initial evaluations.

# Minimal gateway deployment for a single app
kt policy lint --file app-policy.yaml
kt gateway run --policy-config app-policy.yaml --port 41002

Architecture 2: Shared Gateway Platform

Multiple applications share a centralized gateway cluster:

App A ──┐
App B ──┼── Load Balancer ── Gateway Cluster ── LLM APIs
App C ──┘                          │
                                   └── Control-Plane API
                                              │
                                       Console (mgmt)

When to use: Multi-team environments, organization-wide governance, centralized policy management.

Architecture 3: Federated Gateway Model

Teams operate independent gateways but share a central control plane:

Team A: App → Gateway A ──┐
Team B: App → Gateway B ──┼── Control-Plane API
Team C: App → Gateway C ──┘           │
                               Console (mgmt)

When to use: Autonomous teams, different compliance requirements per team, multi-region deployments.

Each gateway reports events to the central API. The Console provides aggregate visibility across all gateways.
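The per-team launch pattern above can be sketched as a small script that derives one gateway process per team. The team names, config paths, and base port below are hypothetical; adapt them to your layout:

```shell
# Build one `kt gateway run` command per team; piping the output to `sh`
# would start the fleet. Team names, config paths, and the base port are
# assumptions, not a prescribed layout.
port=41002
cmds=""
for team in team-a team-b team-c; do
  cmds="${cmds}kt gateway run --policy-config configs/${team}-policy.yaml --port ${port}
"
  port=$((port + 1))
done
printf '%s' "$cmds"
```

Each generated gateway still reports to the same central API, so the Console view stays aggregated.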

Integration Patterns

Pattern 1: Proxy Integration

The gateway acts as a drop-in proxy. Applications point their LLM SDK to the gateway endpoint instead of directly to the provider:

# Before: direct to OpenAI
export OPENAI_BASE_URL=https://api.openai.com/v1

# After: through Keeptrusts gateway
export OPENAI_BASE_URL=http://gateway.internal:41002/v1

No application code changes required. Governance is transparent.
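A quick smoke test of the proxy path can be done with a direct HTTP call. This is a sketch: it assumes the gateway exposes an OpenAI-compatible /v1/chat/completions route, and the host and model name are placeholders for your environment:

```shell
# Send one request through the gateway instead of the provider. The model
# name and gateway host are placeholders; any OpenAI-compatible payload works.
GATEWAY_URL="http://gateway.internal:41002/v1"
body='{"model":"gpt-4o-mini","messages":[{"role":"user","content":"ping"}]}'
curl -sS --max-time 5 \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$body" \
  "${GATEWAY_URL}/chat/completions" \
  || echo "gateway unreachable (expected outside the target network)"
```

If the response comes back and the request shows up as an event, the proxy integration is working end to end.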

Pattern 2: API-First Integration

Applications use the Keeptrusts API directly for programmatic access to governance data:

# Query events for integration monitoring
curl -H "Authorization: Bearer $API_TOKEN" \
  "https://api.keeptrusts.com/v1/events?since=24h&limit=50"

# Check escalation status programmatically
curl -H "Authorization: Bearer $API_TOKEN" \
  "https://api.keeptrusts.com/v1/escalations?status=pending"

Pattern 3: Git-Linked Configuration

Store policy configurations in your existing infrastructure-as-code repository:

# Validate configurations in CI pipeline
kt policy lint --file configs/production-policy.yaml
kt policy lint --file configs/staging-policy.yaml

Link repositories through the Console Settings to automatically sync configuration changes to deployed gateways.
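The CI step above can be generalized to lint every policy file in the repository before a merge. This is a sketch; the configs/ path and the *-policy.yaml naming convention are assumptions about your repo layout:

```shell
# Lint every policy file under configs/ and report a summary line.
# A CI job would fail the build when failures_seen is non-zero.
status=0
checked=0
for f in configs/*-policy.yaml; do
  [ -e "$f" ] || continue          # unexpanded glob: nothing to lint
  checked=$((checked + 1))
  kt policy lint --file "$f" || status=1
done
echo "checked=${checked} failures_seen=${status}"
```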

Pattern 4: Event-Driven Integration

Export Keeptrusts events to your data pipeline for custom analytics:

# Create a recurring export for your data warehouse
kt export create \
  --type events \
  --format csv \
  --since 24h \
  --description "Daily event feed for analytics pipeline"

Designing a PoC

PoC Success Criteria

Define measurable outcomes before starting:

Criteria           | Metric                                     | Target
Policy enforcement | Events processed without error             | > 99.5%
Latency overhead   | Gateway added latency                      | < 50ms p95
Detection accuracy | PII correctly identified                   | > 95% true positive
Integration effort | Time to integrate first app                | < 1 day
Coverage           | Policies covering required risk categories | 100%
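The latency criterion can be checked with a rough probe: time a batch of requests through the gateway and read off the 95th percentile. The /health path is an assumption; substitute any cheap endpoint your gateway serves:

```shell
# Collect n timings through the gateway, sort them, and print the value at
# the 95th-percentile rank. A rough check, not a load test.
n=20
for i in $(seq "$n"); do
  curl -o /dev/null -sS --max-time 2 -w '%{time_total}\n' \
    "http://gateway.internal:41002/health"
done | sort -n | awk -v n="$n" 'NR == int(n * 0.95) { printf "p95 = %.0f ms\n", $1 * 1000 }'
```

For a fair comparison against the < 50ms target, run the same loop against the provider directly and subtract the baselines.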

PoC Implementation Steps

Week 1: Foundation

  1. Deploy a single gateway with baseline policies:

policies:
  - name: poc-pii-detection
    type: pii-detector
    action: redact
    entity_types: [name, email, phone]
    enabled: true

  - name: poc-content-safety
    type: content-filter
    categories: [harmful]
    action: block
    enabled: true

  - name: poc-injection-protection
    type: prompt-injection
    action: block
    enabled: true

  2. Point a single test application to the gateway
  3. Verify events flow to the Console Dashboard:

kt gateway run --policy-config poc-policy.yaml --port 41002
kt doctor
kt events list --since 1h

Week 2: Validation

  1. Expand policies to cover all required risk categories
  2. Measure latency impact and detection accuracy
  3. Test escalation workflows end-to-end
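An end-to-end escalation check can be sketched as: push a prompt containing synthetic PII through the gateway, then confirm the resulting event was recorded. The email address below is fake test data, and the endpoint and model name are placeholders:

```shell
# Probe the PII policy end to end. The address is synthetic test data;
# the gateway host and model are placeholders for your environment.
GATEWAY="http://gateway.internal:41002/v1"
probe='{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Contact jane.doe@example.com"}]}'
curl -sS --max-time 5 \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$probe" "${GATEWAY}/chat/completions" \
  || echo "gateway unreachable (expected outside the target network)"

# The redaction should now appear as an event.
kt events list --since 5m || echo "kt CLI not on PATH"
```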

Week 3: Reporting

  1. Export PoC metrics for stakeholder review:
kt export create \
  --type events \
  --format csv \
  --since 21d \
  --description "PoC results — 3-week pilot"

Scalability Planning

Scaling the Gateway

Dimension  | Strategy                                         | Configuration
Throughput | Horizontal scaling — multiple gateway instances  | Load balancer in front of gateway cluster
Teams      | Team-scoped configurations                       | Per-team policy files or Console Templates
Providers  | Multi-provider gateway config                    | Multiple provider entries in policy YAML
Regions    | Regional gateway deployments                     | One gateway per region, shared control plane

Capacity Planning Inputs

Use Keeptrusts event data to plan capacity:

# Current throughput baseline
curl -H "Authorization: Bearer $API_TOKEN" \
  "https://api.keeptrusts.com/v1/events?since=7d&group_by=gateway"

# Peak usage patterns
curl -H "Authorization: Bearer $API_TOKEN" \
  "https://api.keeptrusts.com/v1/events?since=30d"
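The raw event feed can be reduced to a requests/day figure for sizing. This is a sketch: the .events and .timestamp field names are assumptions about the API response shape, and inline sample data stands in for a real response here (in practice, pipe the curl output above into the jq stage):

```shell
# Average requests/day from an event payload: bucket timestamps by date,
# then divide total events by distinct days. Field names are assumptions.
sample='{"events":[
  {"timestamp":"2024-06-01T10:00:00Z"},
  {"timestamp":"2024-06-01T11:00:00Z"},
  {"timestamp":"2024-06-02T09:00:00Z"}]}'
per_day=$(printf '%s' "$sample" \
  | jq -r '.events[].timestamp[:10]' \
  | sort | uniq -c \
  | awk '{ total += $1; days += 1 } END { printf "%.1f", total / days }')
echo "avg requests/day: ${per_day}"   # 1.5 for the sample above
```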

Growth Projections

Phase        | Users  | Requests/day | Gateways | Control Points
Pilot        | 10-50  | 1K-5K        | 1        | Basic policies
Departmental | 50-500 | 5K-50K       | 2-5      | Team-scoped policies
Enterprise   | 500+   | 50K+         | 5+       | Full policy stack, multi-region

Migration Planning

Migrating from Direct LLM Access

For organizations moving from unmanaged LLM access to governed access through Keeptrusts:

Phase 1: Shadow mode (Week 1-2)

  • Deploy gateway alongside existing direct access
  • Configure policies in log mode (monitor without blocking)
  • Baseline current usage patterns
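Shadow mode can reuse the PoC policy shapes with the action switched to observation only. A sketch, assuming the policy schema accepts a `log` action value:

```yaml
policies:
  - name: shadow-pii-detection
    type: pii-detector
    action: log            # observe only; tighten to redact in Phase 2
    entity_types: [name, email, phone]
    enabled: true
```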

Phase 2: Gradual migration (Week 3-4)

  • Migrate teams one at a time to the gateway endpoint
  • Enable enforcement policies progressively
  • Monitor for false positives and adjust thresholds

Phase 3: Full enforcement (Week 5+)

  • Block direct LLM access at the network level
  • All traffic routes through governed gateways
  • Decommission legacy access patterns

# Validate migration readiness
kt policy lint --file production-policy.yaml
kt doctor

Migration Checklist

  • All LLM providers configured in gateway
  • Policy configurations validated and tested
  • Application teams briefed on endpoint changes
  • Escalation workflows configured and tested
  • Monitoring dashboards set up in Console
  • Rollback plan documented
  • Network policies updated to restrict direct LLM access

Stakeholder Communication

Architecture Decision Summary

Present the Keeptrusts integration to stakeholders with clear value mapping:

Stakeholder    | Key Message                                 | Evidence
CTO            | Unified governance without slowing delivery | Gateway latency < 50ms, self-service templates
CISO           | Security controls on all AI traffic         | Policy enforcement rates, prompt injection detection
VP Engineering | No code changes, minimal integration effort | Proxy integration pattern
Legal          | Complete audit trail for compliance         | Event exports, audit log
Finance        | Cost visibility and control                 | Console Usage

Solutions Architect Workflow

Task                               | Frequency        | Tool
Design integration architecture    | Per engagement   | Reference architectures above
Validate deployment configurations | Per deployment   | kt policy lint
Monitor PoC progress               | Daily during PoC | Console Dashboard
Capacity planning                  | Quarterly        | Event volume analysis
Architecture reviews               | Monthly          | Console + event exports

Success Metrics for Solutions Architecture

Metric                           | Target                           | Source
Integration time per application | < 1 day                          | Deployment tracker
PoC to production conversion     | > 80%                            | Engagement records
Gateway latency overhead         | < 50ms p95                       | Performance monitoring
Policy coverage at go-live       | 100% of required categories      | Configuration audit
Stakeholder satisfaction         | Positive post-deployment review  | Feedback survey

For AI systems

  • Canonical terms: Keeptrusts, reference architecture, integration patterns, PoC deployment, enterprise AI integration, proxy pattern, federated gateway
  • Key surfaces: Console Dashboard (aggregate visibility), Console Configurations, Events API, kt gateway run, kt policy lint, kt doctor
  • Architectures: Single-Application Gateway (PoC), Shared Gateway Platform (multi-team), Federated Gateway Model (autonomous teams)
  • Integration patterns: Proxy (change OPENAI_BASE_URL only), API-first (programmatic governance data access)
  • PoC scope: single app, one gateway, 2-4 week evaluation period
  • Best next pages: Architecture Overview, Quickstart, Cloud Architect Guide, Platform Engineer Guide, DevOps Guide

For engineers

  • Proxy integration (zero code changes): export OPENAI_BASE_URL=http://gateway.internal:41002/v1
  • PoC deployment: kt policy lint --file app-policy.yaml && kt gateway run --policy-config app-policy.yaml --port 41002
  • Verify integration: kt doctor and kt events list --since 24h --limit 50
  • Federated model: each team runs kt gateway run with own config; all report to central Control-Plane API
  • Console provides aggregate visibility across all gateways for centralized monitoring
  • Scale path: single gateway → load-balanced cluster → federated fleet

For leaders

  • The proxy integration pattern means zero application code changes — governance is transparent to development teams, reducing adoption friction to near-zero
  • Three reference architectures (Single-App, Shared Platform, Federated) map to different organizational maturity and autonomy requirements
  • PoC deployments can validate the governance value proposition in 2-4 weeks with a single application and gateway
  • Migration from ungoverned to governed AI is progressive: start with observation-only policies, then introduce enforcement without disrupting existing workflows
  • The federated model supports autonomous teams with different compliance requirements while maintaining centralized visibility and reporting

Next steps