Network Configuration for AI Gateway
The Keeptrusts platform comprises four distinct network components — the CLI gateway (kt), the control-plane API, the management console, and the chat workbench. Each requires specific port allocations, firewall rules, and routing configuration. This guide covers the network topology from the perspective of an infrastructure engineer deploying Keeptrusts in production.
Use this page when
- You are configuring firewall rules, port allocations, and DNS for a Keeptrusts production deployment.
- You need to set up reverse proxy routing between applications, the gateway (port 41002), and the API (port 8080).
- You are configuring NAT traversal or proxy settings for the gateway to reach upstream LLM providers.
- You need to troubleshoot network connectivity issues between Keeptrusts components.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Port Allocations
Default Ports
| Component | Default Port | Protocol | Direction |
|---|---|---|---|
| CLI gateway | 41002 | HTTP/HTTPS | Inbound from applications |
| Control-plane API | 8080 | HTTP/HTTPS | Inbound from gateway, console |
| Management console | 3000 | HTTP/HTTPS | Inbound from browsers |
| Chat workbench | 3001 | HTTP/HTTPS | Inbound from browsers |
| PostgreSQL | 5432 | TCP | Internal only |
Gateway Port Configuration
Override the default gateway listen address with --listen:
kt gateway run --listen 0.0.0.0:9090 --policy-config policy-config.yaml
When running multiple gateway instances behind a load balancer, each instance can bind to the same port on different hosts or different ports on the same host.
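As a minimal sketch, the multi-instance topology can be expressed as an nginx upstream pool. The pool name is illustrative, and the instance addresses assume the two gateway hosts used in the DNS examples in this guide (10.0.0.10 and 10.0.0.11) — adjust for your environment:

```nginx
# Two gateway instances on separate hosts, each bound to the default port
upstream keeptrusts_gateway_pool {
    server 10.0.0.10:41002;
    server 10.0.0.11:41002;
    keepalive 32;   # reuse upstream connections to the gateways
}
```

nginx distributes requests round-robin across the pool by default; a failed instance is temporarily removed after connection errors.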
Firewall Rules
Minimum Required Rules
Configure your firewall to allow the following traffic patterns:
# iptables example — allow inbound to gateway
iptables -A INPUT -p tcp --dport 41002 -j ACCEPT
# Allow gateway to reach the control-plane API
iptables -A OUTPUT -p tcp --dport 8080 -d <api-host> -j ACCEPT
# Allow gateway to reach upstream LLM providers
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
# Allow API to reach PostgreSQL
iptables -A OUTPUT -p tcp --dport 5432 -d <db-host> -j ACCEPT
Network Zones
A typical production deployment places components in separate network zones:
┌─────────────────────────────────────────────────┐
│ DMZ / Application Zone                          │
│ ┌──────────────┐   ┌──────────────────────────┐ │
│ │ Applications │──▶│ kt gateway (41002)       │ │
│ └──────────────┘   └──────────┬───────────────┘ │
├───────────────────────────────┼─────────────────┤
│ Internal Zone                 │                 │
│ ┌──────────────┐   ┌──────────▼───────────────┐ │
│ │ Console/Chat │──▶│ API server (8080)        │ │
│ └──────────────┘   └──────────┬───────────────┘ │
├───────────────────────────────┼─────────────────┤
│ Data Zone                     │                 │
│                    ┌──────────▼───────────────┐ │
│                    │ PostgreSQL (5432)        │ │
│                    └──────────────────────────┘ │
└─────────────────────────────────────────────────┘
UFW Configuration (Ubuntu)
# Gateway ingress
ufw allow 41002/tcp comment "Keeptrusts AI Gateway"
# API access from internal network
ufw allow from 10.0.1.0/24 to any port 8080 proto tcp comment "Keeptrusts API"
# Console access from corporate network
ufw allow from 10.0.0.0/16 to any port 3000 proto tcp comment "Keeptrusts Console"
# Deny direct database access from outside data zone
ufw deny 5432/tcp
Proxy Configuration
Forward Proxy for Outbound LLM Traffic
When the gateway sits behind a corporate proxy for outbound internet access, export the standard proxy variables before starting it. Note that not every HTTP client honors CIDR notation (such as 10.0.0.0/8) in NO_PROXY; if internal exclusions are not being applied, list hosts explicitly:
# Set proxy for the gateway process
export HTTP_PROXY=http://proxy.corp.internal:3128
export HTTPS_PROXY=http://proxy.corp.internal:3128
export NO_PROXY=localhost,127.0.0.1,api.internal,10.0.0.0/8
kt gateway run --policy-config policy-config.yaml
Reverse Proxy with nginx
Place nginx in front of the gateway for TLS termination and request routing:
upstream keeptrusts_gateway {
    server 127.0.0.1:41002;
    keepalive 32;
}

server {
    listen 443 ssl;
    server_name gateway.example.com;

    ssl_certificate     /etc/nginx/ssl/gateway.crt;
    ssl_certificate_key /etc/nginx/ssl/gateway.key;

    location / {
        proxy_pass http://keeptrusts_gateway;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Streaming support
        proxy_buffering off;
        proxy_cache off;
        chunked_transfer_encoding on;
    }
}
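If HAProxy is preferred over nginx, a roughly equivalent front end might look like the sketch below. The frontend/backend names and certificate path are hypothetical; HAProxy streams responses by default, so no extra buffering directives are needed for streaming completions:

```haproxy
frontend ai_gateway
    bind *:443 ssl crt /etc/haproxy/ssl/gateway.pem
    mode http
    default_backend keeptrusts_gateway

backend keeptrusts_gateway
    mode http
    option forwardfor                              # adds X-Forwarded-For
    http-request set-header X-Forwarded-Proto https
    server gw1 127.0.0.1:41002 check
```

The `check` keyword enables active TCP health checks against the gateway instance.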
NAT Traversal
Gateway Behind NAT
When the gateway runs behind NAT and must report its external address to the API:
# Advertise external address for gateway registration
export KEEPTRUSTS_GATEWAY_ADVERTISE_ADDR=gateway.example.com:41002
kt gateway run --policy-config policy-config.yaml
Split-Horizon DNS
In hybrid deployments where the gateway must reach the API using internal addresses while external clients use public DNS:
# /etc/hosts on the gateway host (or internal DNS zone)
10.0.1.50 api.keeptrusts.com
DNS Setup
Internal DNS Records
Create DNS records for each component:
; Gateway (load balanced across instances)
gateway.ai.internal.    IN  A  10.0.0.10
gateway.ai.internal.    IN  A  10.0.0.11
; API
api.ai.internal.        IN  A  10.0.1.50
; Console
console.ai.internal.    IN  A  10.0.2.10
; Chat workbench
chat.ai.internal.       IN  A  10.0.2.11
; Database (internal only)
db.ai.internal.         IN  A  10.0.3.10
Service Discovery with Docker
When running in Docker Compose, services resolve each other by container name:
# docker-compose.yml excerpt
services:
  keeptrusts-api:
    # Reachable at http://keeptrusts-api:8080 within the Docker network
    ports:
      - "8080:8080"
  keeptrusts-gateway:
    environment:
      - KEEPTRUSTS_API_URL=http://keeptrusts-api:8080
    ports:
      - "41002:41002"
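Compose can also monitor the gateway at the container level. The sketch below probes the gateway's /health endpoint (the same endpoint used for connectivity verification); it assumes curl is available inside the image, which may not hold for minimal base images:

```yaml
services:
  keeptrusts-gateway:
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:41002/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```

A failing health check marks the container unhealthy, which orchestrators and `depends_on` conditions can act on.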
Connectivity Verification
Health Check Endpoints
Verify that all components are reachable:
# Check gateway health
curl -s http://gateway.ai.internal:41002/health
# Check API health
curl -s http://api.ai.internal:8080/health
# Check connectivity from gateway to API
curl -s -o /dev/null -w "%{http_code}" http://api.ai.internal:8080/health
# Test full path: application → gateway → provider
curl -s http://gateway.ai.internal:41002/v1/chat/completions \
-H "Authorization: Bearer kt_gk_test" \
-H "Content-Type: application/json" \
-d '{"model":"gpt-4o","messages":[{"role":"user","content":"ping"}]}'
Troubleshooting Network Issues
# Verify port is listening
ss -tlnp | grep -E '41002|8080'
# Test TCP connectivity
nc -zv api.ai.internal 8080
# Trace route to upstream LLM provider
traceroute api.openai.com
# Check DNS resolution
dig gateway.ai.internal
nslookup api.ai.internal
Next steps
- TLS/SSL Configuration — secure traffic with certificates
- Load Balancing — distribute gateway traffic across instances
- Docker Deployment — container networking for Docker deployments
For AI systems
- Canonical terms: Keeptrusts network configuration, gateway port 41002, API port 8080, console port 3000, firewall rules, reverse proxy, DNS setup, NAT traversal.
- Key config/commands: kt gateway run --listen 0.0.0.0:9090 --policy-config policy-config.yaml; iptables -A INPUT -p tcp --dport 41002 -j ACCEPT; internal DNS records (gateway.ai.internal, api.ai.internal); reverse proxy config for nginx.
- Best next pages: TLS/SSL Configuration, Load Balancing, Docker Deployment.
For engineers
- Prerequisites: Network access between gateway → API (port 8080), gateway → upstream LLM providers (443), applications → gateway (41002), browsers → console (3000).
- PostgreSQL (5432) must be internal-only — never expose to external networks.
- Validate with: ss -tlnp | grep -E '41002|8080' to verify listening ports; nc -zv api.ai.internal 8080 for TCP connectivity; traceroute api.openai.com to verify upstream reachability.
- When running behind a corporate proxy, configure HTTPS_PROXY for the gateway process to reach LLM providers.
For leaders
- Network segmentation between components enforces defense-in-depth — database is never exposed externally.
- Gateway port (41002) is the single ingress point for all AI traffic — simplifies firewall audit and compliance documentation.
- DNS-based service discovery enables seamless failover when gateway instances are replaced.
- Misconfigured firewalls are the most common deployment blocker — validate connectivity before going live.