Air-Gapped & Offline Deployment
Organizations in defense, government, healthcare, and critical infrastructure often require air-gapped environments with no internet connectivity. Keeptrusts supports fully offline deployment with local model endpoints, pre-loaded container images, and self-contained policy enforcement.
Use this page when
- You must deploy Keeptrusts in an environment with no internet connectivity (defense, government, classified networks).
- You need to transfer container images offline and configure self-hosted LLM endpoints (vLLM, Ollama).
- Policy enforcement must work entirely on the isolated network without external API calls.
- You need to verify that no container can reach external networks after deployment.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Architecture Overview
┌─────────────────────────────────────────────────────────┐
│                   Air-Gapped Network                    │
│                                                         │
│  ┌──────────┐   ┌─────────┐   ┌───────────────────┐     │
│  │ App/User │──▶│ Gateway │──▶│  Self-Hosted LLM  │     │
│  └──────────┘   │  (kt)   │   │  (vLLM / Ollama)  │     │
│                 └────┬────┘   └───────────────────┘     │
│                      │                                  │
│                 ┌────▼────┐                             │
│                 │   API   │                             │
│                 │ Server  │                             │
│                 └────┬────┘                             │
│                 ┌────▼────┐   ┌──────────────────┐      │
│                 │Postgres │   │ Console / Admin  │      │
│                 └─────────┘   └──────────────────┘      │
│                                                         │
│            No external network connectivity             │
└─────────────────────────────────────────────────────────┘
All components — gateway, API, database, console, admin, and model inference — run on the isolated network.
Transferring Container Images
Export Images on a Connected Machine
# Pull and save all required images
docker pull keeptrusts/api:2.5.0
docker pull keeptrusts/gateway:2.5.0
docker pull keeptrusts/console:2.5.0
docker pull keeptrusts/admin:2.5.0
docker pull postgres:16-alpine
# Save to a single archive
docker save \
  keeptrusts/api:2.5.0 \
  keeptrusts/gateway:2.5.0 \
  keeptrusts/console:2.5.0 \
  keeptrusts/admin:2.5.0 \
  postgres:16-alpine \
  | gzip > keeptrusts-stack-2.5.0.tar.gz
# Verify archive
ls -lh keeptrusts-stack-2.5.0.tar.gz
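Before moving the archive, record a checksum so the air-gapped side can verify the transfer was not corrupted. This is a general good practice for offline media transfer, not a Keeptrusts-specific requirement:

```shell
# On the connected machine — record a checksum next to the archive
sha256sum keeptrusts-stack-2.5.0.tar.gz > keeptrusts-stack-2.5.0.tar.gz.sha256

# On the air-gapped machine, after transfer — verify before loading
sha256sum -c keeptrusts-stack-2.5.0.tar.gz.sha256
```

Transfer the `.sha256` file alongside the archive on the same media.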
Transfer to Air-Gapped Environment
Transfer the archive via approved media (USB, optical disc, cross-domain solution):
# On the air-gapped machine — load images
gunzip -c keeptrusts-stack-2.5.0.tar.gz | docker load
# Verify images are available
docker images | grep keeptrusts
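A sketch for confirming that every image in the bundle actually loaded; the image list mirrors the archive built above, so adjust it to match your bundle:

```shell
# Report any image from the bundle that failed to load
for img in keeptrusts/api:2.5.0 keeptrusts/gateway:2.5.0 \
           keeptrusts/console:2.5.0 keeptrusts/admin:2.5.0 postgres:16-alpine; do
  if docker image inspect "$img" > /dev/null 2>&1; then
    echo "OK: $img"
  else
    echo "MISSING: $img"
  fi
done
```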
Private Registry (Optional)
For larger deployments, run a local Docker registry:
# docker-compose.registry.yml
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    volumes:
      - registry-data:/var/lib/registry
    restart: unless-stopped
volumes:
  registry-data:
# Tag and push to local registry
docker tag keeptrusts/api:2.5.0 registry.internal:5000/keeptrusts/api:2.5.0
docker push registry.internal:5000/keeptrusts/api:2.5.0
# Repeat for all images
for img in api gateway console admin; do
docker tag "keeptrusts/$img:2.5.0" "registry.internal:5000/keeptrusts/$img:2.5.0"
docker push "registry.internal:5000/keeptrusts/$img:2.5.0"
done
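If the internal registry is served over plain HTTP rather than TLS, each Docker host on the isolated network must be told to trust it explicitly in `/etc/docker/daemon.json` (restart the Docker daemon after editing; `registry.internal:5000` matches the tag used above):

```json
{
  "insecure-registries": ["registry.internal:5000"]
}
```

For production use, prefer issuing a certificate from an internal CA and serving the registry over TLS instead.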
Self-Hosted Model Endpoints
Ollama
# On the connected machine — download the Ollama install binary and pull the model
ollama pull llama3.1:8b
# Transfer the binary and ~/.ollama/models/ to the air-gapped machine via approved media
# On the air-gapped machine — start the server against the pre-loaded models
ollama serve &
# Configure gateway to use local Ollama
# policy-config.yaml — self-hosted provider
target:
  provider: ollama
  url: http://ollama.internal:11434
  model: llama3.1:8b
policies:
  - name: content-filter
    type: content_filter
    action: block
    patterns:
      - classified_data
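To confirm the gateway host can actually reach the Ollama endpoint, probe it from the isolated network. `/api/tags` is Ollama's model-listing endpoint; the `probe` helper is just an illustrative wrapper, not part of Keeptrusts:

```shell
# Illustrative helper: probe an HTTP endpoint and report reachability
probe() {
  local name="$1" url="$2"
  if curl -sf --connect-timeout 3 -o /dev/null "$url"; then
    echo "$name: reachable"
  else
    echo "$name: UNREACHABLE"
  fi
}

# /api/tags lists the models Ollama loaded from ~/.ollama/models/
probe "ollama" http://ollama.internal:11434/api/tags
```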
vLLM
# Transfer vLLM Docker image
docker save vllm/vllm-openai:latest | gzip > vllm.tar.gz
# Transfer to air-gapped environment
docker load < vllm.tar.gz
# Run vLLM with a local model
docker run -d \
  --gpus all \
  -v /models/llama-3.1-8b:/model \
  -p 8000:8000 \
  vllm/vllm-openai \
  --model /model \
  --served-model-name llama3.1
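vLLM's OpenAI-compatible server exposes `/v1/models`, so you can check that the served model name registered. The `grep` here is a rough extraction for a quick smoke test, not a full JSON parse:

```shell
# Should list the model registered via --served-model-name
curl -sf http://vllm.internal:8000/v1/models | grep -o '"id": *"[^"]*"'
```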
Gateway Configuration for Local Models
# policy-config.yaml — point the gateway at the local vLLM endpoint
target:
  provider: openai-compatible
  url: http://vllm.internal:8000
  model: llama3.1
# Or register the endpoint as a named provider target (alternative form):
providers:
  targets:
    - id: local-llm
      provider:
        base_url: http://vllm.internal:8000/v1
        secret_key_ref:
          env: LOCAL_LLM_KEY
policies:
  - name: pii-filter
    type: content_filter
    action: redact
    patterns:
      - ssn
      - credit_card
      - email
  - name: classification-guard
    type: content_filter
    action: block
    keywords:
      - TOP SECRET
      - CLASSIFIED
No-Internet Policy Enforcement
Block All External Network Access
Configure the gateway's policy chain to enforce that no data leaves the air-gapped network:
# policy-config.yaml — strict no-internet policy
target:
  provider: openai-compatible
  url: http://vllm.internal:8000
  model: llama3.1
policies:
  - name: no-external-urls
    type: content_filter
    action: block
    description: "Block any attempt to reference external URLs"
    patterns:
      - url_pattern
  - name: data-classification
    type: content_filter
    action: block
    keywords:
      - "UNCLASSIFIED//FOUO"
      - "CUI"
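A smoke test for the keyword policy above: send a prompt containing a blocked marker and confirm the gateway refuses it. The exact refusal response depends on the gateway version, so this sketch only shows the request side:

```shell
# A prompt containing a blocked keyword should be rejected by the gateway
curl -s http://localhost:41002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"llama3.1","messages":[{"role":"user","content":"Summarize this CUI report"}]}'
```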
Network-Level Enforcement
# iptables — block all outbound except local network
iptables -A OUTPUT -d 10.0.0.0/8 -j ACCEPT
iptables -A OUTPUT -d 172.16.0.0/12 -j ACCEPT
iptables -A OUTPUT -d 192.168.0.0/16 -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -j DROP
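The rules above live only in kernel memory, so a reboot would silently restore outbound access. Persist them; the path below assumes the Debian/Ubuntu `iptables-persistent` layout, so adapt it for your distribution:

```shell
# Dump the active ruleset to the location iptables-persistent restores at boot
mkdir -p /etc/iptables
iptables-save > /etc/iptables/rules.v4
```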
# Verify no external connectivity
curl -s --connect-timeout 5 https://api.openai.com && echo "FAIL: External access detected" || echo "OK: No external access"
Local Knowledge Base
Pre-Loading Knowledge Assets
Transfer knowledge base files to the air-gapped environment and load them via the API:
# On the connected machine — export knowledge assets
curl -o knowledge-export.json \
http://api.external:8080/v1/knowledge-base/export \
-H "Authorization: Bearer $API_TOKEN"
# Transfer to air-gapped environment
# On the air-gapped machine — import knowledge assets
curl -X POST http://api.internal:8080/v1/knowledge-base/import \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d @knowledge-export.json
Knowledge Base Bound to Gateway
# policy-config.yaml — bind local knowledge base
knowledge_base:
  assets:
    - name: company-policies
      path: /etc/keeptrusts/knowledge/policies.md
    - name: compliance-guidelines
      path: /etc/keeptrusts/knowledge/compliance.md
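Since nothing can be fetched at runtime, it is worth confirming that every referenced asset file actually exists on the gateway host before starting the stack (paths taken from the config above):

```shell
# Fail loudly if a referenced knowledge asset is missing
for f in /etc/keeptrusts/knowledge/policies.md \
         /etc/keeptrusts/knowledge/compliance.md; do
  if [ -f "$f" ]; then echo "OK: $f"; else echo "MISSING: $f"; fi
done
```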
Docker Compose for Air-Gapped
# docker-compose.airgap.yml
services:
  keeptrusts-api:
    image: keeptrusts/api:2.5.0
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://keeptrusts:${DB_PASSWORD}@postgres:5432/keeptrusts
      - KEEPTRUSTS_JWT_SECRET=${JWT_SECRET}
      - KEEPTRUSTS_SECRET_ENCRYPTION_KEY=${ENCRYPTION_KEY}
    depends_on:
      postgres:
        condition: service_healthy
  keeptrusts-gateway:
    image: keeptrusts/gateway:2.5.0
    restart: unless-stopped
    ports:
      - "41002:41002"
    environment:
      - KEEPTRUSTS_API_URL=http://keeptrusts-api:8080
    volumes:
      - ./policy-config.yaml:/etc/keeptrusts/policy-config.yaml:ro
      - ./knowledge:/etc/keeptrusts/knowledge:ro
  keeptrusts-console:
    image: keeptrusts/console:2.5.0
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - KEEPTRUSTS_API_URL=http://keeptrusts-api:8080
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=keeptrusts
      - POSTGRES_USER=keeptrusts
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U keeptrusts"]
      interval: 10s
      retries: 5
  local-llm:
    image: vllm/vllm-openai:latest
    runtime: nvidia
    volumes:
      - /models/llama-3.1-8b:/model
    ports:
      - "8000:8000"
    command: ["--model", "/model", "--served-model-name", "llama3.1"]
volumes:
  pgdata:
# Start the air-gapped stack
docker compose -f docker-compose.airgap.yml up -d
# Verify all services
docker compose -f docker-compose.airgap.yml ps
Upgrade Procedure for Air-Gapped
# On connected machine — prepare upgrade bundle
docker pull keeptrusts/api:2.6.0
docker pull keeptrusts/gateway:2.6.0
docker pull keeptrusts/console:2.6.0
docker pull keeptrusts/admin:2.6.0
docker save \
  keeptrusts/api:2.6.0 \
  keeptrusts/gateway:2.6.0 \
  keeptrusts/console:2.6.0 \
  keeptrusts/admin:2.6.0 \
  | gzip > keeptrusts-upgrade-2.6.0.tar.gz
# Include release notes and migration guide
tar czf keeptrusts-upgrade-bundle-2.6.0.tar.gz \
keeptrusts-upgrade-2.6.0.tar.gz \
RELEASE-NOTES-2.6.0.md \
MIGRATION-GUIDE.md
# Transfer to air-gapped environment via approved media
# On air-gapped machine
gunzip -c keeptrusts-upgrade-2.6.0.tar.gz | docker load
# Follow standard upgrade procedure (backup first)
docker compose -f docker-compose.airgap.yml down
docker compose -f docker-compose.airgap.yml up -d
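The "(backup first)" note above can be satisfied with a logical dump taken before running `down` (database and user names follow the compose file in this page):

```shell
# Take a logical backup of Postgres before stopping the stack
docker compose -f docker-compose.airgap.yml exec -T postgres \
  pg_dump -U keeptrusts keeptrusts > "backup-$(date +%Y%m%d).sql"
```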
Verification
# Verify no external connectivity from any container
for svc in keeptrusts-api keeptrusts-gateway keeptrusts-console; do
  echo -n "$svc external access: "
  docker compose -f docker-compose.airgap.yml exec -T "$svc" \
    curl -s --connect-timeout 3 https://api.openai.com > /dev/null 2>&1 \
    && echo "FAIL" || echo "BLOCKED (OK)"
done
# Verify internal connectivity
curl -s http://localhost:8080/health | jq .
curl -s http://localhost:41002/health | jq .
curl -s http://localhost:3000/ -o /dev/null -w "%{http_code}"
# Test policy enforcement with local model
curl -s http://localhost:41002/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model":"llama3.1","messages":[{"role":"user","content":"hello"}]}'
Next steps
- Security Hardening — additional controls for classified environments
- Backup & Recovery — offline backup procedures
- Capacity Sizing — resource planning for on-premises hardware
For AI systems
- Canonical terms: Keeptrusts air-gapped deployment, offline deployment, self-hosted LLM, container image transfer, isolated network, no-internet policy enforcement.
- Key config/commands: `docker save` / `docker load` for image transfer; `kt gateway run` with local model endpoints (vLLM on port 8000, Ollama on 11434); `docker compose` with all services on the isolated network; connectivity verification scripts.
- Best next pages: Security Hardening, Backup & Recovery, Capacity Sizing.
For engineers
- Prerequisites: Connected machine for image export; transfer media (USB, DVD, cross-domain solution); target hardware meeting capacity requirements; self-hosted LLM runtime (vLLM or Ollama).
- Export images on the connected machine with `docker save`, transfer via approved media, load with `docker load` on air-gapped hosts.
- Validate with: `curl -s http://localhost:8080/health`, `curl -s http://localhost:41002/health`, and the external connectivity test loop (should print "BLOCKED (OK)" for all containers).
- All knowledge base assets and policy configs must be pre-loaded before disconnection — no runtime downloads.
For leaders
- Enables AI governance in classified environments (defense, intelligence, critical infrastructure) that prohibit internet access.
- All components run on-premises: no data leaves the network, no dependency on cloud LLM providers.
- Requires upfront hardware investment (see Capacity Sizing) and self-hosted model infrastructure.
- Update cadence is manual — new versions require physical media transfer through security review.