Operate Multiple Gateways from One CLI

As your AI governance footprint grows, you move from a single gateway to a fleet — separate gateways per team, environment, region, or compliance boundary. The kt CLI manages all of them from one workstation with consistent tooling.

Use this page when

  • You operate multiple gateways across teams, environments, or regions and need centralized management.
  • You want to sync configs, run rolling updates, or perform canary deployments across a gateway fleet.
  • You need fleet-wide health monitoring or diagnostic commands from a single CLI.

Primary audience

  • Primary: Platform Engineers and DevOps teams managing gateway fleets
  • Secondary: Technical Leaders planning multi-region or multi-team gateway topologies

Registering gateways

Each gateway registers with the control-plane API and receives a unique identity:

# List all registered gateways
kt gateway list

# Show detailed info for a specific gateway
kt gateway show gw-prod-01

# Filter by status
kt gateway list --status online

Registered Gateways
────────────────────
ID          Name           Status   Version   Config      Region     Last Seen
gw-prod-01  Production     online   2.4.1     prod-v12    us-east-1  2s ago
gw-prod-02  Production-2   online   2.4.1     prod-v12    us-west-2  5s ago
gw-staging  Staging        online   2.5.0-rc  staging-v3  us-east-1  3s ago
gw-eu-01    EU Production  online   2.4.1     eu-prod-v8  eu-west-1  4s ago
gw-dev      Development    offline  2.4.0     dev-latest  local      2h ago
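
The registry output above lends itself to simple scripting. The sketch below, a hypothetical wrapper (not part of kt itself), fails a cron or CI health gate when any gateway is offline; it assumes `kt gateway list --status offline` prints one row per offline gateway after the two-line header shown above.

```shell
# Sketch: fail a scheduled check when any registered gateway is offline.
# Assumes the two-line header seen in `kt gateway list` output.
check_fleet_online() {
  local offline
  offline="$(kt gateway list --status offline | tail -n +3)"
  if [ -n "$offline" ]; then
    echo "Offline gateways detected:"
    echo "$offline"
    return 1
  fi
  echo "All gateways online"
}

# Usage: check_fleet_online || notify-oncall
```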

Config sync across gateways

Push configuration updates to specific gateways or groups:

# Push config to a single gateway
kt config push --file policy-config.yaml --gateway gw-prod-01

# Push to all gateways in a group
kt config push --file policy-config.yaml --group production

# Push to all gateways matching a pattern
kt config push --file policy-config.yaml --gateway "gw-prod-*"

# Dry-run to see what would change
kt config push --file policy-config.yaml --group production --dry-run

Dry-run output

Config Push Preview (dry-run)
─────────────────────────────
Target: production group (3 gateways)

gw-prod-01 (us-east-1):
  Current: prod-v12 (sha: abc123)
  New:     prod-v13 (sha: def456)
  Changes: +1 policy (output-disclaimer), threshold change in prompt-injection-guard

gw-prod-02 (us-west-2):
  Current: prod-v12 (sha: abc123)
  New:     prod-v13 (sha: def456)
  Changes: same as gw-prod-01

gw-eu-01 (eu-west-1):
  Current: eu-prod-v8 (sha: 789abc)
  Skipped: gateway uses a different config source (eu-prod)

Apply? Use --confirm to execute.
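
Making the preview a mandatory step is a common pattern in deploy scripts. This sketch is a hypothetical wrapper around the commands above: it always runs the dry-run first (so the diff lands in CI logs) and then applies with --confirm, following the convention shown in the preview output.

```shell
# Sketch: preview-then-apply wrapper. The kt flags mirror the examples
# above; the function itself is an assumed convenience, not a kt feature.
push_with_preview() {
  local file="$1" group="$2"
  # Preview what would change so the diff is captured in logs
  kt config push --file "$file" --group "$group" --dry-run
  # Apply for real
  kt config push --file "$file" --group "$group" --confirm
}

# Usage: push_with_preview policy-config.yaml production
```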

Gateway groups

Organize gateways into logical groups for bulk operations:

# ~/.keeptrusts/gateway-groups.yaml
groups:
  production:
    gateways:
      - gw-prod-01
      - gw-prod-02
    config_source: policies/production.yaml

  staging:
    gateways:
      - gw-staging
    config_source: policies/staging.yaml

  eu:
    gateways:
      - gw-eu-01
    config_source: policies/eu-production.yaml

  all-prod:
    gateways:
      - gw-prod-01
      - gw-prod-02
      - gw-eu-01

# Operate on a group
kt gateway list --group production
kt doctor --group production
kt events tail --group all-prod
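
Once groups are defined, the same group-scoped command can be fanned out across every group you operate. The loop below is a hypothetical convenience (the group names mirror the gateway-groups.yaml example above); the iteration itself is shell glue, not a kt feature.

```shell
# Sketch: run a group-scoped command across every configured group.
# Group names are taken from the gateway-groups.yaml example above.
list_all_groups() {
  local group
  for group in production staging eu; do
    echo "=== $group ==="
    kt gateway list --group "$group"
  done
}
```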

Centralized monitoring

Monitor all gateways from a single terminal:

# Stream events from all gateways
kt events tail --all-gateways

# Health overview of all gateways
kt gateway health

# Health for a specific group
kt gateway health --group production

Fleet health dashboard

Gateway Fleet Health
════════════════════
Gateway     Requests/min  Block Rate  P50 Latency  P99 Latency  Status
gw-prod-01  1,247         2.3%        89ms         342ms        ✓ healthy
gw-prod-02  1,189         2.1%        92ms         358ms        ✓ healthy
gw-eu-01    634           3.1%        112ms        520ms        ⚠ p99 high
gw-staging  12            8.3%        45ms         201ms        ✓ healthy
gw-dev      0             —           —            —            ✗ offline

Fleet totals: 3,082 req/min · 2.4% avg block rate · 94ms avg P50
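
For alerting, you often want only the degraded rows of that dashboard. A minimal sketch, assuming warning and offline rows in `kt gateway health` output carry the ⚠ and ✗ markers shown above:

```shell
# Sketch: print only degraded or offline gateways from the health
# dashboard. The ⚠/✗ markers are taken from the example output above.
unhealthy_gateways() {
  kt gateway health "$@" | grep -E '⚠|✗' || true
}

# Usage: unhealthy_gateways --group production
```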

Rolling updates

Deploy configuration changes progressively across your fleet:

# Rolling update across production gateways
kt config push --file policy-config.yaml --group production \
  --strategy rolling \
  --batch-size 1 \
  --pause-between 60s \
  --rollback-on-error

# Canary deployment: update one gateway, monitor, then proceed
kt config push --file policy-config.yaml --gateway gw-prod-01 \
  --strategy canary \
  --canary-duration 10m \
  --success-threshold "block_rate<5%,p99<500ms"

Rolling update output

Rolling Config Update
─────────────────────
Strategy: rolling (batch size: 1, pause: 60s)
Group: production (3 gateways)

Batch 1/3: gw-prod-01
✓ Config pushed (prod-v12 → prod-v13)
✓ Health check passed (block rate: 2.1%, p99: 340ms)
⏳ Pausing 60s before next batch...

Batch 2/3: gw-prod-02
✓ Config pushed (prod-v12 → prod-v13)
✓ Health check passed (block rate: 2.3%, p99: 355ms)
⏳ Pausing 60s before next batch...

Batch 3/3: gw-eu-01
✓ Config pushed (eu-prod-v8 → prod-v13)
✓ Health check passed (block rate: 3.0%, p99: 510ms)

Rolling update complete. 3/3 gateways updated successfully.
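
The two strategies compose naturally in a pipeline: canary one gateway first, and roll the rest only if the thresholds hold. This is a sketch under assumptions — the kt flags mirror the examples above, but the two-stage ordering and the shell glue are hypothetical.

```shell
# Sketch: canary one gateway, then roll the remaining production fleet.
# A non-zero exit from the canary stage aborts before the rollout.
deploy_prod_config() {
  local file="$1"
  # Stage 1: canary on a single gateway with success thresholds
  kt config push --file "$file" --gateway gw-prod-01 \
    --strategy canary --canary-duration 10m \
    --success-threshold "block_rate<5%,p99<500ms" || return 1
  # Stage 2: rolling update across the rest, one gateway at a time
  kt config push --file "$file" --group production \
    --strategy rolling --batch-size 1 --pause-between 60s \
    --rollback-on-error
}
```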

Automatic rollback

If a health check fails during a rolling update, the CLI automatically rolls back the affected gateway:

Batch 2/3: gw-prod-02
✓ Config pushed (prod-v12 → prod-v13)
✗ Health check FAILED (block rate: 15.2% — threshold: <5%)
↩ Rolling back gw-prod-02 to prod-v12
↩ Rolling back gw-prod-01 to prod-v12
✗ Update aborted. All gateways restored to previous config.

Running diagnostics across the fleet

# Run kt doctor on all production gateways
kt doctor --group production

# Check upstream provider health across all gateways
kt doctor --group all-prod --checks upstream

# Generate a fleet-wide support bundle
kt doctor --group production --bundle --output fleet-diagnostics.tar.gz
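
Support bundles are most useful when collected on a schedule with date-stamped names. The bundle flags below come from the example above; the output directory layout is an assumption.

```shell
# Sketch: nightly diagnostics job keeping date-stamped bundles.
# The directory default is hypothetical; adjust to your retention policy.
collect_fleet_bundle() {
  local outdir="${1:-/var/log/kt-diagnostics}"
  mkdir -p "$outdir"
  kt doctor --group production --bundle \
    --output "$outdir/fleet-$(date +%Y%m%d).tar.gz"
}
```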

Business outcomes

Outcome                        How multi-gateway management helps
Consistent policy enforcement  Config sync ensures all gateways run the same policies
Safe deployments               Rolling updates and canary strategies minimize blast radius
Regional compliance            Separate gateway groups for EU, US, and APAC with region-specific policies
Operational visibility         Centralized monitoring shows fleet-wide health at a glance
Reduced downtime               Automatic rollback reverts bad configurations before users are impacted

For AI systems

  • Canonical terms: kt gateway list, kt gateway show, kt gateway health, kt config push, kt doctor --group, gateway groups, rolling update, canary deployment, automatic rollback.
  • Config: ~/.keeptrusts/gateway-groups.yaml defines groups with gateways and config_source.
  • Push flags: --gateway, --group, --strategy rolling|canary, --batch-size, --pause-between, --rollback-on-error, --dry-run.
  • Best next pages: Performance Tuning, Gateway Diagnostics, Gateway Docker Compose.

For engineers

  • Define gateway groups in ~/.keeptrusts/gateway-groups.yaml for bulk operations.
  • Dry-run before push: kt config push --file policy-config.yaml --group production --dry-run.
  • Rolling update: kt config push --group production --strategy rolling --batch-size 1 --rollback-on-error.
  • Fleet health: kt gateway health --group production or kt doctor --group production.
  • Centralized tail: kt events tail --all-gateways to stream events from the entire fleet.

For leaders

  • Rolling updates and canary deployments minimize blast radius — a bad config affects one gateway before automatic rollback.
  • Gateway groups enable regional compliance (EU, US, APAC) with region-specific policy overlays from one toolchain.
  • Centralized monitoring gives a single operational view across all governed AI traffic.
  • Reduced downtime: automatic rollback reverts bad configurations before end users are impacted.

Next steps