
Golden Paths for AI Development

Golden paths are the paved roads your platform team builds so application developers can adopt AI safely and quickly. This guide covers defining standardized AI access patterns, packaging policy presets, creating getting-started CLI workflows, and generating internal documentation automatically.

Use this page when

  • You are defining the recommended, supported way for developers to adopt AI on your platform
  • You need to create tiered policy presets (exploration, internal, customer-facing) for different risk levels
  • You want to build SDK wrappers, getting-started CLI flows, and auto-generated internal documentation

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

What Is a Golden Path?

A golden path is the recommended, supported, and tested way to accomplish a task on your platform. For AI adoption, this means:

  • One way to get an API key — through the self-service provisioning flow
  • One way to configure policies — through version-controlled YAML configs
  • One way to send requests — through the Keeptrusts gateway
  • One way to monitor usage — through the console dashboard

Deviations are possible but unsupported. The golden path removes ambiguity.

Standardized AI Access Patterns

The Gateway-First Pattern

All AI traffic flows through the Keeptrusts gateway. Direct calls to LLM providers are blocked at the network level:

# network-policy.yaml (Kubernetes NetworkPolicy)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-direct-llm-access
  namespace: applications
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Allow traffic to the Keeptrusts gateway
    - to:
        - namespaceSelector:
            matchLabels:
              name: keeptrusts
          podSelector:
            matchLabels:
              app: keeptrusts-gateway
      ports:
        - port: 41002
    # Direct access to LLM provider APIs is blocked by omission:
    # any egress not matched by a rule is denied. Still allow DNS
    # (add similar rules for other internal services as needed).
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8
      ports:
        - port: 53
          protocol: UDP
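If the manifest lives in Git, a small CI check can guard against regressions that quietly widen egress. The sketch below is a hypothetical lint, assuming the NetworkPolicy has already been parsed into a Python dict (for example with yaml.safe_load); allowed_egress_ports is an illustrative name, and the inline dict mirrors the manifest above:

```python
# Hypothetical CI lint: confirm the parsed NetworkPolicy only permits
# egress to the gateway port and DNS, nothing else.

def allowed_egress_ports(policy: dict) -> set[int]:
    """Collect every port the NetworkPolicy's egress rules permit."""
    ports = set()
    for rule in policy.get("spec", {}).get("egress", []):
        for p in rule.get("ports", []):
            ports.add(p["port"])
    return ports

# Reduced form of the manifest above, as yaml.safe_load would produce it.
policy = {
    "kind": "NetworkPolicy",
    "spec": {
        "policyTypes": ["Egress"],
        "egress": [
            {"ports": [{"port": 41002}]},                  # Keeptrusts gateway
            {"ports": [{"port": 53, "protocol": "UDP"}]},  # DNS
        ],
    },
}

ports = allowed_egress_ports(policy)
assert ports == {41002, 53}, f"unexpected egress ports: {ports}"
assert 443 not in ports  # direct HTTPS egress to LLM providers stays blocked
```

Running this in the same pipeline that applies the manifest means a review comment like "just open 443 for now" fails loudly instead of merging silently.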

SDK Wrapper

Provide a thin SDK wrapper that pre-configures the gateway endpoint:

# internal_ai_sdk/client.py
import os

from openai import OpenAI


def create_ai_client() -> OpenAI:
    """Create an OpenAI client routed through the Keeptrusts gateway."""
    return OpenAI(
        base_url=os.environ.get(
            "KEEPTRUSTS_GATEWAY_URL", "http://keeptrusts-gateway:41002/v1"
        ),
        api_key=os.environ["KEEPTRUSTS_GATEWAY_TOKEN"],
    )

Developers import this instead of configuring the OpenAI SDK directly:

from internal_ai_sdk.client import create_ai_client

client = create_ai_client()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this report."}],
)
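The wrapper is also a natural place to fail fast on misconfiguration. The sketch below is a hypothetical extension, not part of the shipped SDK: resolve_gateway_config() and DEFAULT_GATEWAY_URL are illustrative names, while the environment variable names match the wrapper above:

```python
# Hypothetical hardening for the SDK wrapper: surface a clear error when
# the gateway token is missing, instead of a bare KeyError deep inside
# the OpenAI SDK.
import os

DEFAULT_GATEWAY_URL = "http://keeptrusts-gateway:41002/v1"


def resolve_gateway_config() -> tuple[str, str]:
    """Return (base_url, token), raising a descriptive error if unset."""
    token = os.environ.get("KEEPTRUSTS_GATEWAY_TOKEN")
    if not token:
        raise RuntimeError(
            "KEEPTRUSTS_GATEWAY_TOKEN is not set. Run `kt auth login` "
            "and export the token before creating an AI client."
        )
    base_url = os.environ.get("KEEPTRUSTS_GATEWAY_URL", DEFAULT_GATEWAY_URL)
    return base_url, token
```

A descriptive error at client creation time saves developers a round trip through gateway logs when onboarding a new service.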

Template Configs and Policy Presets

Tier-Based Presets

Define policy tiers that match your organization's risk appetite:

# presets/tier-1-exploration.yaml
keeptrusts:
  preset:
    name: "Tier 1 — Exploration"
    description: "Lightweight guardrails for experimentation"
    policies:
      - name: basic-logging
        type: event_logging
        action: log
        log_level: summary
      - name: cost-cap
        type: spend_limit
        action: warn
        max_daily_usd: 50

# presets/tier-2-internal.yaml
keeptrusts:
  preset:
    name: "Tier 2 — Internal Applications"
    description: "Standard guardrails for internal-facing AI features"
    policies:
      - name: pii-redaction
        type: output_filter
        action: redact
        patterns: ["SSN", "credit_card", "email"]
      - name: prompt-injection-guard
        type: input_filter
        action: block
        detection: prompt_injection
        sensitivity: medium
      - name: full-logging
        type: event_logging
        action: log
        log_level: full
      - name: cost-cap
        type: spend_limit
        action: block
        max_daily_usd: 200

# presets/tier-3-customer-facing.yaml
keeptrusts:
  preset:
    name: "Tier 3 — Customer-Facing"
    description: "Strict guardrails for external AI features"
    policies:
      - name: pii-redaction
        type: output_filter
        action: redact
        patterns: ["SSN", "credit_card", "email", "phone_number", "address"]
      - name: prompt-injection-guard
        type: input_filter
        action: block
        detection: prompt_injection
        sensitivity: high
      - name: content-safety
        type: output_filter
        action: block
        categories: ["hate_speech", "violence", "self_harm"]
      - name: disclaimer
        type: output_modifier
        action: append
        text: "This is AI-generated content and may contain errors."
      - name: full-audit
        type: event_logging
        action: log
        log_level: full
        retention_days: 365
      - name: cost-cap
        type: spend_limit
        action: block
        max_daily_usd: 1000
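Because tiers are meant to escalate, a lightweight check can catch drift between preset files, such as a higher tier accidentally dropping a guardrail. The sketch below is hypothetical: each preset is reduced by hand to the fields the check needs (in practice you would load the YAML files), and check_tier_escalation is an illustrative name:

```python
# Hypothetical preset lint: every tier must keep all guardrail types of
# the tier below it, and the spend cap must rise with the tier.

TIERS = [
    {"name": "tier-1-exploration", "max_daily_usd": 50,
     "policy_types": {"event_logging", "spend_limit"}},
    {"name": "tier-2-internal", "max_daily_usd": 200,
     "policy_types": {"output_filter", "input_filter",
                      "event_logging", "spend_limit"}},
    {"name": "tier-3-customer-facing", "max_daily_usd": 1000,
     "policy_types": {"output_filter", "input_filter", "output_modifier",
                      "event_logging", "spend_limit"}},
]


def check_tier_escalation(tiers: list[dict]) -> None:
    """Verify each tier is a strict superset of the tier below it."""
    for lower, higher in zip(tiers, tiers[1:]):
        missing = lower["policy_types"] - higher["policy_types"]
        assert not missing, f"{higher['name']} dropped guardrails: {missing}"
        assert higher["max_daily_usd"] > lower["max_daily_usd"], (
            f"{higher['name']} must raise, not lower, the spend cap"
        )


check_tier_escalation(TIERS)
```

Wiring this into the same CI job that lints the presets keeps the tier ladder monotonic as policies evolve.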

Preset Selection via CLI

Developers choose a preset during project initialization:

# Initialize a new AI project with Tier 2 guardrails
kt config init --preset tier-2-internal --output policy-config.yaml

# Validate the generated config
kt policy lint --file policy-config.yaml

# Preview the policy chain
kt config show --config policy-config.yaml

Getting-Started CLI Commands

Project Bootstrap Sequence

Document a reproducible bootstrap sequence that every team follows:

# 1. Install the CLI
curl -fsSL https://get.keeptrusts.dev/cli | sh

# 2. Authenticate with the platform
kt auth login --api-url https://keeptrusts-api.internal:8080

# 3. Initialize project config from a preset
kt config init --preset tier-2-internal --output policy-config.yaml

# 4. Validate the config
kt policy lint --file policy-config.yaml

# 5. Start a local gateway for development
kt gateway run --policy-config policy-config.yaml --port 41002

# 6. Test with a sample request
curl -X POST http://localhost:41002/v1/chat/completions \
  -H "Authorization: Bearer ${KEEPTRUSTS_GATEWAY_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"Hello"}]}'

CLI Cheat Sheet

Distribute a cheat sheet to developers:

  • Validate config: kt policy lint --file policy-config.yaml
  • Start local gateway: kt gateway run --policy-config policy-config.yaml
  • Tail live events: kt events tail --gateway my-gateway
  • Export events: kt events export --format csv --days 7
  • Check gateway health: curl http://localhost:41002/health

Internal Documentation Generation

Auto-Generated Policy Docs

Parse policy configs to generate human-readable documentation:

#!/usr/bin/env bash
set -euo pipefail

CONFIG="${1:?Usage: gen-policy-docs.sh <config-file>}"
OUTPUT_DIR="docs/ai-policies"
mkdir -p "${OUTPUT_DIR}"

echo "# AI Policy Documentation" > "${OUTPUT_DIR}/index.md"
echo "" >> "${OUTPUT_DIR}/index.md"
echo "Generated from \`${CONFIG}\` on $(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "${OUTPUT_DIR}/index.md"
echo "" >> "${OUTPUT_DIR}/index.md"

# Extract policy names and types
yq -r '.keeptrusts.policies[] | "## \(.name)\n\n- **Type:** \(.type)\n- **Action:** \(.action)\n"' \
  "${CONFIG}" >> "${OUTPUT_DIR}/index.md"

echo "Documentation generated at ${OUTPUT_DIR}/index.md"
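Where yq is unavailable, the same rendering can be done in plain Python. The sketch below is a hypothetical equivalent, assuming the policy list has already been parsed from YAML into dicts; render_policy_docs is an illustrative name and the markdown layout mirrors the shell script:

```python
# Hypothetical pure-Python doc generator, for environments without yq.
from datetime import datetime, timezone


def render_policy_docs(config_path: str, policies: list[dict]) -> str:
    """Render a markdown policy summary matching the shell script's output."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    lines = [
        "# AI Policy Documentation",
        "",
        f"Generated from `{config_path}` on {stamp}",
        "",
    ]
    for policy in policies:
        lines += [
            f"## {policy['name']}",
            "",
            f"- **Type:** {policy['type']}",
            f"- **Action:** {policy['action']}",
            "",
        ]
    return "\n".join(lines)


# Example: one policy entry as yaml.safe_load would produce it.
doc = render_policy_docs("policy-config.yaml", [
    {"name": "pii-redaction", "type": "output_filter", "action": "redact"},
])
assert "## pii-redaction" in doc
```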

Living Documentation in CI

Regenerate docs on every policy change and publish to your internal wiki:

# .github/workflows/policy-docs.yml
name: Policy Documentation

on:
  push:
    paths:
      - 'policy-config.yaml'
      - 'presets/**'

jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Generate policy docs
        run: ./scripts/gen-policy-docs.sh policy-config.yaml

      - name: Publish to wiki
        run: |
          cp docs/ai-policies/index.md ../wiki/ai-policies.md
          cd ../wiki && git add . && git commit -m "Update AI policy docs" && git push

Golden paths succeed when developers never need to ask "how do I use AI here?" — the answer is already in the tooling, templates, and docs your platform provides.

For AI systems

  • Canonical terms: golden path, gateway-first pattern, policy presets, tier-based presets, SDK wrapper, NetworkPolicy, kt config init, template configs
  • Key patterns: all AI traffic routes through gateway (port 41002), direct LLM provider access blocked via Kubernetes NetworkPolicy
  • Preset tiers: Tier 1 (exploration, $50/day), Tier 2 (internal, $200/day), Tier 3 (customer-facing, $1000/day, content safety + disclaimers + 365-day audit retention)
  • Related pages: Self-Service Portal, Config-as-Code, Internal Developer Platform

For engineers

  • Implement the gateway-first pattern by blocking direct egress to LLM providers with Kubernetes NetworkPolicy
  • Build a thin SDK wrapper (e.g. create_ai_client()) that pre-configures base_url to the gateway and reads KEEPTRUSTS_GATEWAY_TOKEN from the environment
  • Define tiered policy presets in YAML (exploration, internal, customer-facing) with increasing guardrail severity
  • Use kt config init --preset to scaffold new team configs from a preset template
  • Auto-generate internal policy documentation from YAML configs using a CI script
  • Validate: confirm direct LLM API calls are denied by NetworkPolicy while gateway-routed calls succeed

For leaders

  • Golden paths reduce time-to-AI-adoption by eliminating ambiguity — one supported way per task
  • Tiered presets align governance strictness to actual risk level (experimentation vs. production)
  • Blocking direct LLM access at the network level makes policy enforcement structural rather than dependent on developer compliance
  • SDK wrappers make governance invisible to developers — they use the standard OpenAI SDK with a different base URL
  • Self-service documentation reduces support load on the platform team

Next steps