Solutions Architect Guide: Enterprise AI Integration
As a Solutions Architect designing enterprise AI deployments, you bridge the gap between business requirements and technical implementation. Keeptrusts provides the governance layer that makes AI adoption architecturally sound — with clear integration patterns, scalable deployment models, and measurable PoC outcomes.
Use this page when
- You are designing a reference architecture for enterprise AI governance
- You need to choose between single-app, shared gateway, or federated gateway topologies
- You are planning a PoC deployment to validate Keeptrusts in your environment
- You want to integrate AI governance using the proxy pattern (no application code changes)
- You are designing a migration strategy from ungoverned AI to governed AI
Primary audience
- Primary: Technical Engineers (Solutions Architects, Enterprise Architects)
- Secondary: Cloud Architects, Platform Engineers, CTOs
Reference Architectures
Architecture 1: Single-Application Gateway
Simplest deployment — one application, one gateway, governed AI access:
┌─────────────────────────────────────────┐
│              Application                │
│                                         │
│  Service → Keeptrusts Gateway → LLM API │
│                    │                    │
│                    └── Events → API     │
└─────────────────────────────────────────┘
When to use: PoC, single-team deployments, initial evaluations.
# Minimal gateway deployment for a single app
kt policy lint --file app-policy.yaml
kt gateway run --policy-config app-policy.yaml --port 41002
Architecture 2: Shared Gateway Platform
Multiple applications share a centralized gateway cluster:
App A ──┐
App B ──┼── Load Balancer ── Gateway Cluster ── LLM APIs
App C ──┘                           │
                                    └── Control-Plane API
                                              │
                                       Console (mgmt)
When to use: Multi-team environments, organization-wide governance, centralized policy management.
Architecture 3: Federated Gateway Model
Teams operate independent gateways but share a central control plane:
Team A: App → Gateway A ──┐
Team B: App → Gateway B ──┼── Control-Plane API
Team C: App → Gateway C ──┘           │
                                 Console (mgmt)
When to use: Autonomous teams, different compliance requirements per team, multi-region deployments.
Each gateway reports events to the central API. The Console provides aggregate visibility across all gateways.
Integration Patterns
Pattern 1: Proxy Integration
The gateway acts as a drop-in proxy. Applications point their LLM SDK to the gateway endpoint instead of directly to the provider:
# Before: direct to OpenAI
export OPENAI_BASE_URL=https://api.openai.com/v1
# After: through Keeptrusts gateway
export OPENAI_BASE_URL=http://gateway.internal:41002/v1
No application code changes required. Governance is transparent.
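The switch above can be sketched in Python. The endpoints mirror the environment-variable example; the `resolve_base_url` helper is hypothetical (not part of any SDK) and only illustrates that governed vs. direct access is a configuration decision, not a code change:

```python
import os

# Endpoints from the example above; adjust for your environment.
GATEWAY_URL = "http://gateway.internal:41002/v1"
DIRECT_URL = "https://api.openai.com/v1"

def resolve_base_url(governed: bool = True) -> str:
    """Pick the LLM base URL: an explicit OPENAI_BASE_URL wins;
    otherwise route through the gateway (governed) or go direct."""
    return os.environ.get("OPENAI_BASE_URL") or (GATEWAY_URL if governed else DIRECT_URL)

if __name__ == "__main__":
    os.environ.pop("OPENAI_BASE_URL", None)
    print(resolve_base_url())                # gateway endpoint
    print(resolve_base_url(governed=False))  # direct provider endpoint
```

Because the application only consumes the resolved URL, rolling governance back (or forward) is a deploy-time toggle.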
Pattern 2: API-First Integration
Applications use the Keeptrusts API directly for programmatic access to governance data:
# Query events for integration monitoring
curl -H "Authorization: Bearer $API_TOKEN" \
  "https://api.keeptrusts.com/v1/events?since=24h&limit=50"
# Check escalation status programmatically
curl -H "Authorization: Bearer $API_TOKEN" \
  "https://api.keeptrusts.com/v1/escalations?status=pending"
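For programmatic consumers it can help to centralize request construction. A minimal Python sketch, using the base URL and bearer-token header from the curl examples above; `build_events_request` is a hypothetical helper, and the returned URL and headers can be passed to any HTTP client:

```python
from urllib.parse import urlencode

API_BASE = "https://api.keeptrusts.com/v1"  # base URL from the curl examples above

def build_events_request(token: str, **params):
    """Assemble the URL and headers for an Events API query."""
    url = f"{API_BASE}/events?{urlencode(params)}"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

url, headers = build_events_request("EXAMPLE_TOKEN", since="24h", limit=50)
print(url)  # https://api.keeptrusts.com/v1/events?since=24h&limit=50
```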
Pattern 3: Git-Linked Configuration
Store policy configurations in your existing infrastructure-as-code repository:
# Validate configurations in CI pipeline
kt policy lint --file configs/production-policy.yaml
kt policy lint --file configs/staging-policy.yaml
Link repositories through the Console Settings to automatically sync configuration changes to deployed gateways.
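As an illustration, the lint commands above can gate merges in CI. This GitHub Actions fragment is a hypothetical sketch: the job name and config paths are placeholders, and it assumes the kt CLI is installed on the runner.

```yaml
# Hypothetical CI job: fail the pipeline if policy configs don't lint
policy-lint:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Lint policy configurations
      run: |
        kt policy lint --file configs/production-policy.yaml
        kt policy lint --file configs/staging-policy.yaml
```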
Pattern 4: Event-Driven Integration
Export Keeptrusts events to your data pipeline for custom analytics:
# Create a recurring export for your data warehouse
kt export create \
  --type events \
  --format csv \
  --since 24h \
  --description "Daily event feed for analytics pipeline"
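Once an export lands in your pipeline, it can be consumed with ordinary CSV tooling. A minimal Python sketch; the column names (timestamp, gateway, policy, action) are assumptions for illustration, so check a real export for the actual schema:

```python
import csv
from collections import Counter
from io import StringIO

# Stand-in for a downloaded export; column names are assumptions.
sample = """timestamp,gateway,policy,action
2025-01-01T00:00:00Z,gw-1,poc-pii-detection,redact
2025-01-01T00:05:00Z,gw-1,poc-content-safety,block
2025-01-01T00:07:00Z,gw-2,poc-pii-detection,redact
"""

def actions_by_policy(csv_text: str) -> Counter:
    """Count enforcement actions per policy: a typical first
    aggregation for an analytics pipeline."""
    counts = Counter()
    for row in csv.DictReader(StringIO(csv_text)):
        counts[(row["policy"], row["action"])] += 1
    return counts

print(actions_by_policy(sample))
```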
Designing a PoC
PoC Success Criteria
Define measurable outcomes before starting:
| Criterion | Metric | Target |
|---|---|---|
| Policy enforcement | Events processed without error | > 99.5% |
| Latency overhead | Gateway added latency | < 50ms p95 |
| Detection accuracy | PII correctly identified | > 95% true positive |
| Integration effort | Time to integrate first app | < 1 day |
| Coverage | Policies covering required risk categories | 100% |
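The latency target above reduces to a percentile check over measured gateway overheads. A sketch using the nearest-rank method; the sample data below is synthetic, and in practice you would feed in per-request measurements:

```python
import math

def p95(samples_ms):
    """Nearest-rank 95th percentile of latency samples in milliseconds."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.95 * len(ordered))  # nearest-rank method
    return ordered[rank - 1]

# Synthetic per-request gateway overheads (ms)
overheads = [12, 18, 9, 35, 22, 14, 41, 27, 16, 19,
             48, 11, 23, 30, 15, 20, 17, 25, 13, 21]
print(p95(overheads))       # 41
print(p95(overheads) < 50)  # True: within the PoC target
```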
PoC Implementation Steps
Week 1: Foundation
- Deploy a single gateway with baseline policies:
policies:
  - name: poc-pii-detection
    type: pii-detector
    action: redact
    entity_types: [name, email, phone]
    enabled: true
  - name: poc-content-safety
    type: content-filter
    categories: [harmful]
    action: block
    enabled: true
  - name: poc-injection-protection
    type: prompt-injection
    action: block
    enabled: true
- Point a single test application to the gateway
- Verify events flow to the Console Dashboard
kt gateway run --policy-config poc-policy.yaml --port 41002
kt doctor
kt events list --since 1h
Week 2: Validation
- Expand policies to cover all required risk categories
- Measure latency impact and detection accuracy
- Test escalation workflows end-to-end
Week 3: Reporting
- Export PoC metrics for stakeholder review:
kt export create \
  --type events \
  --format csv \
  --since 21d \
  --description "PoC results — 3-week pilot"
Scalability Planning
Scaling the Gateway
| Dimension | Strategy | Configuration |
|---|---|---|
| Throughput | Horizontal scaling — multiple gateway instances | Load balancer in front of gateway cluster |
| Teams | Team-scoped configurations | Per-team policy files or Console Templates |
| Providers | Multi-provider gateway config | Multiple provider entries in policy YAML |
| Regions | Regional gateway deployments | One gateway per region, shared control plane |
Capacity Planning Inputs
Use Keeptrusts event data to plan capacity:
# Current throughput baseline
curl -H "Authorization: Bearer $API_TOKEN" \
  "https://api.keeptrusts.com/v1/events?since=7d&group_by=gateway"
# Peak usage patterns
curl -H "Authorization: Bearer $API_TOKEN" \
  "https://api.keeptrusts.com/v1/events?since=30d"
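Event volumes from the queries above feed a simple sizing calculation. A sketch: the peak-to-average ratio and per-instance capacity below are illustrative assumptions, not product figures, so replace both with measurements from your own baseline:

```python
import math

def gateway_instances_needed(requests_per_day: int,
                             peak_to_avg: float = 4.0,
                             per_instance_rps: float = 50.0) -> int:
    """Estimate gateway instance count from daily request volume.

    peak_to_avg and per_instance_rps are illustrative defaults;
    measure both in your environment before committing to a number.
    """
    avg_rps = requests_per_day / 86_400          # seconds per day
    peak_rps = avg_rps * peak_to_avg             # assumed burstiness
    return max(1, math.ceil(peak_rps / per_instance_rps))

print(gateway_instances_needed(50_000))     # departmental-scale volume
print(gateway_instances_needed(5_000_000))  # enterprise-scale volume
```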
Growth Projections
| Phase | Users | Requests/day | Gateways | Control Points |
|---|---|---|---|---|
| Pilot | 10-50 | 1K-5K | 1 | Basic policies |
| Departmental | 50-500 | 5K-50K | 2-5 | Team-scoped policies |
| Enterprise | 500+ | 50K+ | 5+ | Full policy stack, multi-region |
Migration Planning
Migrating from Direct LLM Access
For organizations moving from unmanaged LLM access to governed access through Keeptrusts:
Phase 1: Shadow mode (Weeks 1-2)
- Deploy gateway alongside existing direct access
- Configure policies in log mode (monitor without blocking)
- Baseline current usage patterns
Phase 2: Gradual migration (Weeks 3-4)
- Migrate teams one at a time to the gateway endpoint
- Enable enforcement policies progressively
- Monitor for false positives and adjust thresholds
Phase 3: Full enforcement (Week 5+)
- Block direct LLM access at the network level
- All traffic routes through governed gateways
- Decommission legacy access patterns
# Validate migration readiness
kt policy lint --file production-policy.yaml
kt doctor
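Before flipping a policy from log mode to enforcement, it helps to estimate how much traffic it would have blocked. A sketch over hypothetical event records; the field names are assumptions, and the rate it computes is a rough proxy for the disruption enforcement would cause:

```python
def would_block_rate(events, policy_name):
    """Fraction of a policy's log-mode events that were flagged.
    Event fields (policy, flagged) are illustrative assumptions."""
    flagged = sum(1 for e in events if e["policy"] == policy_name and e["flagged"])
    total = sum(1 for e in events if e["policy"] == policy_name)
    return flagged / total if total else 0.0

# Hypothetical shadow-mode events collected during Phase 1
events = [
    {"policy": "poc-content-safety", "flagged": False},
    {"policy": "poc-content-safety", "flagged": True},
    {"policy": "poc-content-safety", "flagged": False},
    {"policy": "poc-pii-detection", "flagged": True},
]
print(would_block_rate(events, "poc-content-safety"))  # 1 of 3 flagged
```

A high rate suggests tuning thresholds (or fixing false positives) before enabling enforcement for that team.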
Migration Checklist
- All LLM providers configured in gateway
- Policy configurations validated and tested
- Application teams briefed on endpoint changes
- Escalation workflows configured and tested
- Monitoring dashboards set up in Console
- Rollback plan documented
- Network policies updated to restrict direct LLM access
Stakeholder Communication
Architecture Decision Summary
Present the Keeptrusts integration to stakeholders with clear value mapping:
| Stakeholder | Key Message | Evidence |
|---|---|---|
| CTO | Unified governance without slowing delivery | Gateway latency < 50ms, self-service templates |
| CISO | Security controls on all AI traffic | Policy enforcement rates, prompt injection detection |
| VP Engineering | No code changes, minimal integration effort | Proxy integration pattern |
| Legal | Complete audit trail for compliance | Event exports, audit log |
| Finance | Cost visibility and control | Console Usage |
Solutions Architect Workflow
| Task | Frequency | Tool |
|---|---|---|
| Design integration architecture | Per engagement | Reference architectures above |
| Validate deployment configurations | Per deployment | kt policy lint |
| Monitor PoC progress | Daily during PoC | Console Dashboard |
| Capacity planning | Quarterly | Event volume analysis |
| Architecture reviews | Monthly | Console + event exports |
Success Metrics for Solutions Architecture
| Metric | Target | Source |
|---|---|---|
| Integration time per application | < 1 day | Deployment tracker |
| PoC to production conversion | > 80% | Engagement records |
| Gateway latency overhead | < 50ms p95 | Performance monitoring |
| Policy coverage at go-live | 100% of required categories | Configuration audit |
| Stakeholder satisfaction | Positive post-deployment review | Feedback survey |
For AI systems
- Canonical terms: Keeptrusts, reference architecture, integration patterns, PoC deployment, enterprise AI integration, proxy pattern, federated gateway
- Key surfaces: Console Dashboard (aggregate visibility), Console Configurations, Events API, kt gateway run, kt policy lint, kt doctor
- Architectures: Single-Application Gateway (PoC), Shared Gateway Platform (multi-team), Federated Gateway Model (autonomous teams)
- Integration patterns: Proxy (change OPENAI_BASE_URL only), API-first (programmatic governance data access)
- PoC scope: single app, one gateway, 2-4 week evaluation period
- Best next pages: Architecture Overview, Quickstart, Cloud Architect Guide, Platform Engineer Guide, DevOps Guide
For engineers
- Proxy integration (zero code changes): export OPENAI_BASE_URL=http://gateway.internal:41002/v1
- PoC deployment: kt policy lint --file app-policy.yaml && kt gateway run --policy-config app-policy.yaml --port 41002
- Verify integration: kt doctor and kt events list --since 24h --limit 50
- Federated model: each team runs kt gateway run with its own config; all report to the central Control-Plane API
- Console provides aggregate visibility across all gateways for centralized monitoring
- Scale path: single gateway → load-balanced cluster → federated fleet
For leaders
- The proxy integration pattern means zero application code changes — governance is transparent to development teams, reducing adoption friction to near-zero
- Three reference architectures (Single-App, Shared Platform, Federated) map to different organizational maturity and autonomy requirements
- PoC deployments can validate the governance value proposition in 2-4 weeks with a single application and gateway
- Migration from ungoverned to governed AI is progressive: start with observation-only policies, then introduce enforcement without disrupting existing workflows
- The federated model supports autonomous teams with different compliance requirements while maintaining centralized visibility and reporting
Next steps
- Architecture overview: Architecture Overview
- Start a PoC: Quickstart
- Multi-cloud patterns: Cloud Architect Guide
- Multi-tenant platform: Platform Engineer Guide
- Production operations: DevOps Guide