
Cloud Architect Guide: Multi-Cloud AI Governance

As a Cloud Architect designing AI infrastructure, you need to abstract provider dependencies, enforce data residency requirements, design failover architectures, and optimize costs across multiple clouds and LLM providers. Keeptrusts serves as the governance layer that sits between your applications and LLM providers, providing a single control point regardless of deployment topology.

Use this page when

  • You are designing multi-cloud or multi-region AI gateway deployments
  • You need to abstract LLM provider dependencies behind a unified control point
  • You are enforcing data residency requirements across jurisdictions
  • You are planning disaster recovery and failover for AI infrastructure
  • You need to optimize AI costs across multiple providers and regions

Primary audience

  • Primary: Technical Engineers (Cloud Architects, Infrastructure Architects)
  • Secondary: DevOps Engineers, Platform Engineers, CTOs

Provider Abstraction Layer

Unified Gateway Architecture

Keeptrusts gateways abstract the underlying LLM provider, giving applications a single endpoint regardless of which models or providers are in use:

Application → Keeptrusts Gateway → Provider A (primary)
                                 → Provider B (failover)
                                 → Provider C (cost-optimized)
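One way to express these roles in configuration is an ordered target list. The sketch below assumes a priority field on each target (lower value tried first); the exact field name and failover semantics may differ from your gateway version's schema, so treat it as illustrative:

```yaml
# Hypothetical sketch: encoding primary/failover roles with a priority field.
# Field names beyond id/provider/secret_key_ref are assumptions; verify against your schema.
providers:
  targets:
    - id: openai            # primary
      priority: 1
      provider:
        secret_key_ref:
          env: OPENAI_API_KEY
    - id: anthropic         # failover
      priority: 2
      provider:
        secret_key_ref:
          env: ANTHROPIC_API_KEY
```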

Multi-Provider Configuration

providers:
  targets:
    - id: openai
      provider:
        secret_key_ref:
          env: OPENAI_API_KEY
    - id: anthropic
      provider:
        secret_key_ref:
          env: ANTHROPIC_API_KEY
    - id: azure-openai
      provider:
        base_url: https://your-instance.openai.azure.com
        secret_key_ref:
          env: AZURE_OPENAI_API_KEY
policies:
  - name: provider-governance
    type: content-filter
    categories:
      - harmful
      - biased
    action: block
    enabled: true
  - name: cost-control
    type: cost_limit
    monthly_limit: 10000
    action: block
    enabled: true

Applications call the gateway at a single endpoint. The gateway handles provider selection, policy enforcement, and event logging transparently.
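For example, an application might call the gateway as shown below. The host and port match the deployment command that follows; the request path and body shape are assumptions (an OpenAI-compatible chat endpoint is assumed), and the gateway key is a placeholder:

```shell
# Hypothetical sketch: an application request to the local gateway.
# Path and payload shape are assumptions; the gateway key is a placeholder.
curl -s http://localhost:41002/v1/chat/completions \
  -H "Authorization: Bearer $KT_GATEWAY_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```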

# Deploy the multi-provider gateway
kt gateway run \
  --listen 0.0.0.0:41002 \
  --policy-config multi-provider-policy.yaml

# Verify provider connectivity
kt doctor

Deployment Topologies

Topology 1: Centralized Gateway

Best for organizations with a single cloud region and centralized governance:

┌─────────────────────────────────────────────┐
│ Cloud Region (e.g., eu-west-1)              │
│                                             │
│ App A ──┐                                   │
│ App B ──┼── Keeptrusts Gateway ── LLM APIs  │
│ App C ──┘        │                          │
│                  └── Control-Plane API      │
└─────────────────────────────────────────────┘

Topology 2: Distributed Edge Gateways

Best for multi-region or latency-sensitive deployments:

┌──────────────────┐   ┌──────────────────┐
│ Region: US-East  │   │ Region: EU-West  │
│ App A ── GW A ───┤   │ App C ── GW C ───┤
│ App B ── GW B ───┤   │ App D ── GW D ───┤
└──────┬───────────┘   └──────┬───────────┘
       │                      │
       └── Control-Plane API ─┘
             (centralized)

Each gateway runs kt gateway run with its own configuration but reports events to the central API. Manage all gateways from the Console Dashboard.

Topology 3: Kubernetes-Native

Deploy gateways as Kubernetes services alongside your applications:

# Validate the gateway configuration
kt policy lint --file k8s-gateway-policy.yaml

# Verify connectivity from within the cluster
kt doctor

The gateway runs as a sidecar or dedicated service, with policy configurations managed through ConfigMaps or Git-linked configurations synced via the Keeptrusts API.
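A minimal sketch of the dedicated-service pattern follows. The container image name, ConfigMap name, and mount path are assumptions to adapt to your environment; the gateway arguments mirror the multi-provider deploy command from the "For engineers" summary:

```yaml
# Hypothetical sketch: gateway as a dedicated Kubernetes service.
# Image, ConfigMap name, and mount path are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kt-gateway
spec:
  replicas: 3                                # multiple instances for availability
  selector:
    matchLabels: {app: kt-gateway}
  template:
    metadata:
      labels: {app: kt-gateway}
    spec:
      containers:
        - name: gateway
          image: keeptrusts/gateway:latest   # assumed image name
          args: ["gateway", "run",
                 "--listen", "0.0.0.0:41002",
                 "--policy-config", "/etc/kt/policy.yaml"]
          ports:
            - containerPort: 41002
          volumeMounts:
            - {name: policy, mountPath: /etc/kt}
      volumes:
        - name: policy
          configMap: {name: kt-gateway-policy}   # policy YAML managed as a ConfigMap
---
apiVersion: v1
kind: Service
metadata:
  name: kt-gateway
spec:
  selector: {app: kt-gateway}
  ports:
    - port: 41002
      targetPort: 41002
```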

Data Residency Controls

Enforcing Regional Data Boundaries

For organizations with data sovereignty requirements, deploy region-specific gateways with provider configurations that ensure data stays within jurisdiction:

providers:
  targets:
    - id: azure-openai-eu
      provider:
        base_url: https://eu-instance.openai.azure.com
        secret_key_ref:
          env: AZURE_OPENAI_EU_KEY
policies:
  - name: eu-pii-protection
    type: pii-detector
    action: redact
    entity_types:
      - name
      - email
      - phone
      - national_id
    enabled: true
  - name: eu-dlp-controls
    type: dlp-filter
    patterns:
      - name: eu-personal-data
        regex: '(IBAN|passport\s+number)'
    action: block
    enabled: true

Regional Gateway Mapping

Region            Gateway     Provider            Data Residency
EU (Frankfurt)    gw-eu-west  Azure OpenAI EU     EU only
US (Virginia)     gw-us-east  OpenAI, Anthropic   US only
APAC (Singapore)  gw-apac     Azure OpenAI APAC   APAC only

Network Topology

Gateway Network Position

The Keeptrusts gateway should be positioned in the network path between applications and external LLM APIs:

Internal Network                                 │ External
                                                 │
App → Internal LB → KT Gateway ──────────────────┼──→ LLM Provider APIs
                         │                       │
                         └── Control-Plane API   │

Network Requirements

Component           Ports  Protocol  Direction
Gateway (inbound)   41002  HTTPS     Apps → Gateway
Gateway (outbound)  443    HTTPS     Gateway → LLM APIs
Control-Plane API   8080   HTTPS     Gateway → API
Console             3000   HTTPS     Browser → Console

Security Considerations

  • Gateway keys (kt_gk_...) authenticate application traffic to the gateway
  • Bearer tokens authenticate gateway-to-API communication
  • All external traffic should use TLS
  • Network policies should restrict gateway egress to approved LLM provider endpoints only
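In Kubernetes deployments, the egress restriction above can be sketched with a NetworkPolicy. The namespace, pod labels, and ports below are assumptions to adapt; note that NetworkPolicy matches IP addresses, not hostnames, so pinning specific provider endpoints typically requires an egress proxy or a CNI with FQDN-rule support:

```yaml
# Hypothetical sketch: restrict gateway egress to HTTPS plus cluster DNS.
# Labels and ports are assumptions; provider IP allowlists would go in the egress rules.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kt-gateway-egress
spec:
  podSelector:
    matchLabels: {app: kt-gateway}
  policyTypes: [Egress]
  egress:
    - ports:
        - {protocol: TCP, port: 443}   # LLM provider APIs and Control-Plane API
    - ports:
        - {protocol: UDP, port: 53}    # DNS resolution
        - {protocol: TCP, port: 53}
```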

Cost Optimization

Multi-Provider Cost Strategy

Route requests to cost-effective providers based on use case:

policies:
  - name: cost-optimization
    type: cost_limit
    monthly_limit: 15000
    action: block
    enabled: true

Monitoring Spend Across Providers

Use the Console Cost Center to track spend across all providers and teams:

# Pull cost breakdown by provider
curl -H "Authorization: Bearer $API_TOKEN" \
  "https://api.keeptrusts.com/v1/events?since=30d&group_by=provider"

# Export cost data for FinOps analysis
kt export create \
  --type events \
  --format csv \
  --since 30d \
  --description "Monthly cloud cost analysis"

Cost Optimization Checklist

  • Per-team budget caps configured in policy
  • Cost Center dashboards reviewed weekly
  • Model usage patterns analyzed for right-sizing
  • Unused gateway keys identified and rotated
  • Provider pricing changes tracked and configurations updated

Disaster Recovery

Gateway High Availability

Deploy multiple gateway instances behind a load balancer:

LB → Gateway Instance 1 (active)
   → Gateway Instance 2 (active)
   → Gateway Instance 3 (standby)

Each instance runs the same configuration. If one fails, the load balancer routes traffic to healthy instances.

Failover Strategy

Failure Mode                   Impact                                                        Recovery
Single gateway instance down   Minimal; LB routes to healthy instances                       Auto-recovery via health checks
Primary LLM provider outage    Service degradation                                           Failover to secondary provider
Control-plane API unavailable  No new config changes; gateways continue with cached config   API redundancy or manual config
Regional outage                Full region loss                                              Cross-region gateway failover

DR Testing

# Verify gateway health
kt doctor

# Test provider failover by validating backup config
kt policy lint --file dr-failover-policy.yaml

Infrastructure as Code

Git-Linked Configuration

Store gateway configurations in Git and sync automatically through the Keeptrusts API:

  1. Store policy YAML in your infrastructure repository
  2. Link the repository in Console Settings
  3. Changes merged to the main branch are automatically synced to gateways
# Validate configuration before committing
kt policy lint --file policy-config.yaml
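The validation step can also run in CI before changes reach the main branch. A sketch using GitHub Actions follows; the kt installation step and repository layout are placeholders to adapt:

```yaml
# Hypothetical sketch: lint gateway policy configs on every pull request.
# The install step and policies/ path are assumptions about your repository.
name: validate-gateway-policies
on:
  pull_request:
    paths: ["policies/**.yaml"]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install kt CLI
        run: ./scripts/install-kt.sh   # placeholder install step
      - name: Lint policy files
        run: |
          for f in policies/*.yaml; do
            kt policy lint --file "$f"
          done
```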

Cloud Architect Workflow with Keeptrusts

Task                                Frequency  Tool
Review gateway topology and health  Weekly     Console Dashboard + kt doctor
Analyze cost distribution           Weekly     Console Cost Center
Validate data residency controls    Monthly    Regional gateway audit
DR failover testing                 Quarterly  Failover simulation
Provider configuration review       Quarterly  kt policy lint
Capacity planning                   Quarterly  Event volume trends

Success Metrics for Cloud Architecture

Metric                     Target                      Source
Gateway availability       99.9% uptime                Health check monitoring
Provider failover time     < 30 seconds                Failover event logs
Data residency compliance  100% of requests in-region  Event logs by gateway region
Cost efficiency            Within 5% of budget         Console Usage
Configuration drift        Zero unmanaged gateways     Configuration audit

For AI systems

  • Canonical terms: Keeptrusts, multi-cloud AI governance, provider abstraction, data residency, gateway topology, disaster recovery
  • Key surfaces: Console Dashboard, Console Settings (Git-linked repos), Console Usage, Events API
  • Commands: kt gateway run, kt policy lint, kt doctor, kt export create
  • Config concepts: multi-provider providers block with secret_key_ref, priority, base_url; regional gateway mapping; cost_limit policy; Kubernetes deployments; Git-linked configuration sync
  • Topologies: Centralized Gateway, Distributed Edge Gateways, Kubernetes-Native
  • Best next pages: DevOps Guide, Platform Engineer Guide, Architecture Overview, Gateway Configuration

For engineers

  • Deploy multi-provider gateway: kt gateway run --listen 0.0.0.0:41002 --policy-config multi-provider-policy.yaml
  • Validate configs per region: kt policy lint --file eu-gateway-policy.yaml
  • Verify connectivity: kt doctor
  • Track per-provider spend: GET /v1/events?since=30d&group_by=provider
  • Use Git-linked configurations in Console Settings for infrastructure-as-code policy management
  • Deploy multiple gateway instances behind a load balancer for HA; each instance runs the same config

For leaders

  • Provider abstraction through the gateway prevents vendor lock-in and enables competitive pricing negotiations across OpenAI, Anthropic, and Azure OpenAI
  • Data residency is enforced architecturally by deploying region-specific gateways with provider configurations that constrain data to specific jurisdictions
  • Multi-region gateway deployments provide disaster recovery with automatic failover to healthy instances
  • Console Usage gives FinOps visibility across all providers and regions for informed budget allocation

Next steps