Quickstart
Use this page when you want the fastest config-first path to a working Keeptrusts environment.
Use this page when
- You want the fastest path from an empty directory to a running Keeptrusts gateway with validated policy enforcement.
- You need to author, lint, test, and deploy your first policy-config.yaml.
- You want to send your first governed AI request and verify the result in the console.
If you want the broader operating model before you start typing, read Config-First Workflow. This quickstart assumes policy-config.yaml is your primary interface and gets you to a working gateway as quickly as possible.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
What this quickstart covers
This quickstart gets you from an empty working directory to the point where you can:
- Author a usable policy-config.yaml
- Lint and test it
- Start the gateway with that config
- Send your first governed request
- Verify the result before moving into shared rollout workflows
Before you begin
- Install kt first by following Install the Gateway.
- Have at least one upstream provider credential ready, such as OPENAI_API_KEY.
- If you want the gateway to report into a shared Keeptrusts control plane, also have KEEPTRUSTS_API_URL and KEEPTRUSTS_GATEWAY_TOKEN.
- If your team already uses the console, keep it available so you can verify rollouts later in Configurations, Gateways, and Events.
1. Create a starter config
Generate a starter project:
kt init
Then replace policy-config.yaml with a minimal config you can reason about:
pack:
  name: local-quickstart
  version: 0.1.0
  enabled: true

providers:
  targets:
    - id: openai-primary
      provider: openai
      model: gpt-4o
      base_url: https://api.openai.com
      secret_key_ref:
        env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - audit-logger

policy:
  pii-detector:
    action: redact
  audit-logger:
    retention_days: 30
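The pii-detector block above sets action: redact. As a conceptual illustration of what a redact action does (this is not Keeptrusts' implementation; the single email pattern below is a simplified stand-in for a real detector), matched PII spans are replaced before the response leaves the gateway:

```python
import re

# Simplified stand-in for a PII detector: real detectors cover many
# more entity types than this single email pattern.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_pii(text: str) -> str:
    """Replace detected PII spans with a redaction marker."""
    return EMAIL.sub("[REDACTED]", text)

print(redact_pii("Contact me at jane.doe@example.com for details"))
# → Contact me at [REDACTED] for details
```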
2. Keep provider secrets out of YAML
For a local or self-hosted gateway, export the provider key in your shell:
export OPENAI_API_KEY="sk-..."
If you are targeting a hosted gateway instead of a local one, store the value as a config variable and switch the provider target to secret_key_ref.store:
pack:
  name: quickstart-providers-2
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: openai-primary
      provider: openai
      model: gpt-4o
      secret_key_ref:
        store: OPENAI_API_KEY

policies:
  chain:
    - audit-logger

policy:
  audit-logger:
    immutable: true
    retention_days: 365
    log_all_access: true
Then create the config variable so secret_key_ref.store can resolve it:
kt config-var create --name OPENAI_API_KEY --value "sk-..."
3. Lint and test the config
Run validation before you start the gateway:
kt policy lint --file policy-config.yaml
kt policy test --json
kt init creates starter tests in tests/. Keep expanding that directory as your config grows so new policy behavior is always reviewed before rollout.
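If you script this validation step, the gate is simply: lint must pass before tests run, and both must pass before the gateway starts. A minimal Python sketch (the kt commands are the ones above; the wrapper itself is illustrative):

```python
import subprocess

def validate_config(path="policy-config.yaml", run=subprocess.run):
    """Gate a rollout: kt lint must pass, then kt tests must pass."""
    lint = run(["kt", "policy", "lint", "--file", path])
    if lint.returncode != 0:
        return False  # do not run tests against a config that fails lint
    tests = run(["kt", "policy", "test", "--json"])
    return tests.returncode == 0
```

The run parameter exists only so the gate can be exercised without the kt binary installed.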
4. Start the gateway
kt gateway run \
--listen 0.0.0.0:8080 \
--policy-config policy-config.yaml
If you want the gateway to report events into a shared Keeptrusts environment, set the control-plane env vars before starting it:
export KEEPTRUSTS_API_URL="https://api.keeptrusts.com"
export KEEPTRUSTS_GATEWAY_TOKEN="kt_gw_your_gateway_token"
5. Send your first request through the gateway
Once the gateway is running, point your application at the gateway address instead of the upstream provider. The gateway is OpenAI-compatible, so existing integrations usually need only a base URL change.
- cURL
- Python
- Node.js
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [
{"role": "user", "content": "Hello, world!"}
]
}'
from openai import OpenAI
client = OpenAI(
base_url="http://localhost:8080/v1",
api_key="your-api-key",
)
response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "http://localhost:8080/v1",
apiKey: "your-api-key",
});
const response = await client.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "Hello, world!" }],
});
console.log(response.choices[0].message.content);
If your gateway requires request authentication, use the Access Key or gateway-facing token your deployment team assigned in place of the placeholder api_key values in the examples above. For local unsecured testing, no extra authorization header is required.
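Using only the standard library, the same request looks like the sketch below. The build_headers helper is illustrative, and the Bearer scheme is an assumption; match whatever header format your deployment team specifies:

```python
import json
import urllib.request

GATEWAY_URL = "http://localhost:8080/v1/chat/completions"

def build_headers(access_key=None):
    """Attach an Authorization header only when the gateway enforces auth."""
    headers = {"Content-Type": "application/json"}
    if access_key:
        headers["Authorization"] = f"Bearer {access_key}"  # assumed Bearer scheme
    return headers

def governed_request(prompt, access_key=None):
    """Send an OpenAI-style chat request through the local gateway."""
    body = json.dumps({
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(GATEWAY_URL, data=body,
                                 headers=build_headers(access_key))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```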
6. Verify the config is live
First, inspect the running config:
curl http://localhost:8080/keeptrusts/config
If your gateway is connected to the Keeptrusts control plane, also verify:
- Configurations shows the saved or running version you expect.
- Gateways and Actions shows the runtime as healthy and connected.
- Events shows the new request with the verdict and reason generated by your config.
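If you check the /keeptrusts/config endpoint from a script, a small parse helper makes the verification explicit. The response shape is assumed here to echo the YAML's top-level pack block; adjust to whatever your gateway actually returns:

```python
def running_pack(config_json):
    """Extract the pack name and version from the running-config payload.

    Assumes the endpoint echoes the YAML's top-level pack block.
    """
    pack = config_json.get("pack", {})
    return pack.get("name"), pack.get("version")

# Example against the quickstart config from step 1:
print(running_pack({"pack": {"name": "local-quickstart", "version": "0.1.0"}}))
# → ('local-quickstart', '0.1.0')
```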
7. Prepare the config for shared environments
Once the local flow works, treat the config as an artifact you can promote:
- Commit policy-config.yaml and tests/ to Git.
- Use Configurations for version history, validation, and rollout, including versioned approval from the console if your team wants it.
- Add environment-specific config variables instead of copying secrets into files.
If something looks wrong
- Lint fails: fix the schema or unknown-key error before moving on.
- Provider unavailable: confirm the required secret_key_ref.env or secret_key_ref.store resolves correctly.
- No traffic visible in the console: verify KEEPTRUSTS_API_URL and KEEPTRUSTS_GATEWAY_TOKEN are set for gateway reporting.
- Unexpected verdicts: inspect the running config and compare the request against the matching policy block.
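The console-visibility case usually comes down to missing environment variables. A quick pre-flight check (the variable names are the ones from this page; the helper itself is illustrative):

```python
import os

REQUIRED = ("KEEPTRUSTS_API_URL", "KEEPTRUSTS_GATEWAY_TOKEN")

def missing_reporting_vars(env=None):
    """Return the control-plane variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]
```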
Recommended next steps
After this page, move through these guides in order:
- Declarative Config Reference for the full schema.
- Configurations to version and roll out the same config safely.
- Declarative Config Patterns to organize multi-environment config as code.
- Install the Gateway to bootstrap another machine or runtime.
For AI systems
- Canonical terms: Keeptrusts, quickstart, policy-config.yaml, kt init, kt gateway run, kt policy lint, kt policy test, secret_key_ref, providers.targets, policies.chain.
- Feature and config names: pack, providers, policies, policy, secret_key_ref.env, secret_key_ref.store, kt config-var create, KEEPTRUSTS_GATEWAY_TOKEN.
- Commands: kt init, kt policy lint --file policy-config.yaml, kt policy test --json, kt gateway run --listen 0.0.0.0:8080 --policy-config policy-config.yaml.
- Best next pages: Declarative Config Reference, Configurations, Install the Gateway, Create Configuration.
For engineers
- Prerequisites: kt CLI installed, at least one provider credential (e.g., OPENAI_API_KEY), optional KEEPTRUSTS_API_URL and KEEPTRUSTS_GATEWAY_TOKEN for gateway reporting.
- Validation steps: kt policy lint --file policy-config.yaml must pass before starting the gateway. After starting, curl http://localhost:8080/keeptrusts/config confirms the running config.
- The gateway is OpenAI-compatible: existing integrations only need a base URL change to http://localhost:8080/v1.
- Troubleshooting: if lint fails, fix schema errors first. If no events appear in the console, verify KEEPTRUSTS_API_URL and KEEPTRUSTS_GATEWAY_TOKEN are exported.
For leaders
- This quickstart demonstrates that a working governance layer can be stood up in minutes — useful for evaluating Keeptrusts before committing to a full rollout.
- The config-first approach means policy changes are reviewable, auditable, and reversible without downtime.
- Moving from local quickstart to production involves promoting the config through Configurations — no re-engineering required.
- Event reporting to the control plane gives immediate visibility into what the gateway is enforcing.
Next steps
- Declarative Config Reference — full schema for policy-config.yaml
- Configurations — version, validate, and roll out configs to shared environments
- Declarative Config Patterns — multi-environment config organization
- Install the Gateway — install kt on another machine or runtime
- Create Configuration — build and validate a versioned YAML draft in the console