Ground AI Responses in Your Organization's Knowledge
AI models hallucinate. They invent facts, misattribute sources, and present confident answers with no basis in reality. Keeptrusts solves this by injecting your verified organizational knowledge into every AI request and verifying that responses are grounded in it.
Use this page when
- You need to reduce AI hallucinations by injecting verified organizational knowledge into every request.
- You are managing a knowledge lifecycle (draft → active → retired) for content that feeds AI responses.
- You want citation tracking to prove which knowledge assets were used in each AI response.
Primary audience
- Primary: Technical Leaders
- Secondary: Technical Engineers, AI Agents
What you'll achieve
- Versioned knowledge assets managed through the console or CLI
- Runtime knowledge injection — the gateway automatically includes relevant context in every request
- Citation records that track which knowledge was used in each response
- Lifecycle management — draft, review, promote, and retire knowledge assets
- Grounding verification — the citation verifier checks responses against injected context
How knowledge grounding works
```
User request
  → Gateway receives request
  → Recalls bound knowledge assets for the configuration
  → Injects knowledge context into the request
  → Forwards enriched request to provider
  ← Provider returns response
  → Citation verifier checks response against injected context
  → Citation records written for audit
  ← Grounded response delivered to user
```
The user's application doesn't change. The gateway handles knowledge injection and verification transparently.
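The injection step above can be pictured as a pure function that prepends active knowledge to the request before it reaches the provider. This is an illustrative Python sketch, not Keeptrusts internals; the asset shape and chat-message format are assumptions made for the example:

```python
def inject_knowledge(assets, user_message):
    """Build an enriched prompt: active knowledge context plus the original request.

    `assets` is a list of dicts with "name", "status", and "content" keys
    (an assumed shape, for illustration only). Only active assets are injected.
    """
    context = "\n\n".join(
        f"[{a['name']}]\n{a['content']}" for a in assets if a["status"] == "active"
    )
    system = f"Answer using only the following verified knowledge:\n\n{context}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

messages = inject_knowledge(
    [
        {"name": "Return Policy", "status": "active",
         "content": "Returns are accepted within 30 days of purchase."},
        {"name": "Pricing Draft", "status": "draft", "content": "TBD"},
    ],
    "Can I return my order after six weeks?",
)
```

Note that the draft asset is filtered out: only active content ever reaches the provider, which is the point of the lifecycle described below.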
Creating knowledge assets
From the CLI
```shell
# Create a knowledge asset from a file
kt knowledge-base create \
  --name "Product Documentation" \
  --content-file ./product-docs.md \
  --status draft

# Create from inline content
kt knowledge-base create \
  --name "Return Policy" \
  --content "Returns are accepted within 30 days of purchase. Items must be unused and in original packaging." \
  --status draft
```
From the console
- Navigate to Knowledge Base
- Click Create Asset
- Enter a name and paste or upload your content
- Save as draft — content is not yet active
Knowledge lifecycle
Every knowledge asset follows a lifecycle that ensures only reviewed, approved content reaches your AI systems.
| Status | Meaning |
|---|---|
| draft | Work in progress — not injected at runtime |
| active | Approved and live — injected into bound configurations |
| retired | Archived — no longer injected but preserved for audit |
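The lifecycle is a one-way progression. A minimal sketch of the transition rules, assuming (not confirmed by this page) that retired assets cannot be reactivated:

```python
# Assumed transition rules: draft -> active -> retired, no path back.
ALLOWED = {"draft": {"active"}, "active": {"retired"}, "retired": set()}

def promote(current, target):
    """Return the new status, or raise if the transition is not allowed."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move {current!r} -> {target!r}")
    return target
```

Encoding the rules as data keeps the check in one place and makes the allowed paths easy to audit.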
Promoting an asset to active
```shell
# Promote a draft to active
kt knowledge-base promote \
  --name "Product Documentation" \
  --status active
```
Promotion requires appropriate permissions. In team-scoped configurations, only team admins or org owners can promote assets.
Retiring an asset
```shell
# Retire an outdated asset
kt knowledge-base promote \
  --name "Product Documentation v1" \
  --status retired
```
Retired assets stop being injected at runtime but remain in the system for audit and compliance review.
Binding knowledge to configurations
Knowledge assets must be bound to a configuration to be injected at runtime. A single asset can be bound to multiple configurations.
```shell
# Bind a knowledge asset to a configuration
kt knowledge-base bind \
  --name "Product Documentation" \
  --configuration-id support-bot-config
```
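At runtime, two conditions must both hold for an asset to be injected: it is bound to the configuration, and it is active. A small Python sketch of that resolution (the `bindings` and `assets` shapes are assumptions for illustration):

```python
def assets_to_inject(config_id, bindings, assets):
    """Resolve which assets the gateway injects for a configuration.

    `bindings` maps asset name -> set of bound configuration ids;
    `assets` maps asset name -> lifecycle status.
    """
    return sorted(
        name
        for name, configs in bindings.items()
        if config_id in configs and assets.get(name) == "active"
    )
```

A bound draft is skipped, and an active asset bound elsewhere is skipped too.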
Binding to agents
You can also bind knowledge assets directly to agents:
```shell
# Bind knowledge to a specific agent
kt knowledge-base bind \
  --name "Product Documentation" \
  --agent-id support-agent
```
When an agent is linked to a gateway with bound knowledge, the gateway combines both configuration-level and agent-level knowledge for injection.
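The combination step can be read as a simple deduplicated union of the two binding levels; this sketch assumes the gateway does nothing more elaborate than merging the two sets:

```python
def combined_knowledge(config_assets, agent_assets):
    """Union of configuration-level and agent-level asset names, deduplicated."""
    return sorted(set(config_assets) | set(agent_assets))
```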
Versioned assets
Knowledge assets are versioned automatically. Each update creates a new version, and the gateway always injects the latest active version.
```shell
# Update a knowledge asset (creates a new version)
kt knowledge-base update \
  --name "Product Documentation" \
  --content-file ./product-docs-v2.md
```
Previous versions are retained for audit. You can review the version history in the console Knowledge Base detail page.
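"Latest active version" can be made concrete with a small selector. This sketch assumes each version carries its own status and a monotonically increasing version number, which this page does not spell out:

```python
def version_to_inject(versions):
    """Pick the latest active version of an asset, or None if none is active.

    `versions` is a list of dicts with "version" and "status" keys
    (an assumed shape, for illustration only).
    """
    active = [v for v in versions if v["status"] == "active"]
    return max(active, key=lambda v: v["version"]) if active else None
```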
Citation verification
The citation-verifier policy checks whether AI responses are grounded in the injected knowledge. Ungrounded claims are flagged.
```yaml
policies:
  chain:
    - citation-verifier
    - quality-scorer
    - audit-logger

policy:
  citation-verifier:
    mode: strict
    min_grounding_score: 0.8
    on_ungrounded: escalate
    log_citation_records: true
  quality-scorer:
    overall_min_score: 0.65
    on_fail: escalate
```
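The `min_grounding_score` threshold implies a numeric grounding metric. To make the threshold concrete, here is a deliberately naive stand-in, the fraction of response sentences whose content words all appear in the injected context. The real verifier is certainly more sophisticated; this is only to show what a score between 0 and 1 can mean:

```python
import re

def grounding_score(response, context):
    """Naive grounding metric: fraction of response sentences fully covered
    by the context vocabulary. For illustration only."""
    ctx_words = set(re.findall(r"[a-z']+", context.lower()))
    sentences = [s for s in re.split(r"[.!?]+", response) if s.strip()]
    if not sentences:
        return 0.0
    grounded = sum(
        1 for s in sentences
        if set(re.findall(r"[a-z']+", s.lower())) <= ctx_words
    )
    return grounded / len(sentences)
```

With `min_grounding_score: 0.8`, a response where half the sentences introduce unsupported claims would score 0.5 and be escalated.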
Citation records
Every time a response uses content from a knowledge asset, a citation record is written:
| Field | Description |
|---|---|
| `asset_name` | Which knowledge asset was referenced |
| `asset_version` | Which version of the asset was active |
| `grounding_score` | How closely the response matched the source |
| `timestamp` | When the citation was recorded |
| `event_id` | Link to the parent request event |
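Put together, a single citation record might look like the following. This is an illustrative shape assembled from the fields above, not a documented wire format, and the identifier values are hypothetical:

```python
citation_record = {
    "asset_name": "Product Documentation",
    "asset_version": 3,                     # version active at request time
    "grounding_score": 0.91,                # 0.0 to 1.0
    "timestamp": "2025-01-15T10:32:00Z",
    "event_id": "evt_123",                  # hypothetical parent-event id
}
```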
Review citations in the console:
- Navigate to Events
- Filter by `citation_verifier`
- Click an event to see the full citation breakdown
Example: full knowledge-grounded config
```yaml
pack:
  name: knowledge-grounded-support
  version: "1.0"

policies:
  chain:
    - citation-verifier
    - quality-scorer
    - pii-detector
    - audit-logger

policy:
  citation-verifier:
    mode: strict
    min_grounding_score: 0.8
    on_ungrounded: escalate
    log_citation_records: true
  quality-scorer:
    dimensions:
      relevance:
        weight: 0.5
        min_score: 0.7
      coherence:
        weight: 0.3
        min_score: 0.6
      completeness:
        weight: 0.2
        min_score: 0.5
    overall_min_score: 0.65
    on_fail: escalate
  pii-detector:
    action: redact
  audit-logger:
    retention_days: 365

providers:
  targets:
    - id: openai-gpt4o
      provider: openai
      model: gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
```
Quick wins
- Create one knowledge asset from your most-asked FAQ — immediate improvement in response accuracy
- Bind it to your support configuration — the gateway starts injecting context automatically
- Enable `citation-verifier` — measure how often responses are grounded in your content
- Review citation records — identify gaps where you need more knowledge assets
For AI systems
- Canonical terms: knowledge base, knowledge asset, knowledge binding, citation record, citation-verifier, grounding score, lifecycle (draft/active/retired).
- CLI commands: `kt knowledge-base create`, `kt knowledge-base promote`, `kt knowledge-base bind`, `kt kb` (alias).
- Config keys: `policy.citation-verifier.mode`, `policy.citation-verifier.min_grounding_score`.
- Console surfaces: Knowledge Base page (create, promote, bind assets).
- Best next pages: Quality Assurance, Govern AI Agents, CLI Knowledge Base Reference.
For engineers
- Prerequisites: gateway running, `citation-verifier` in the policy chain.
- Create a knowledge asset: `kt knowledge-base create --name "Product Docs" --content-file ./docs.md --status draft`.
- Promote to active: `kt knowledge-base promote --name "Product Docs" --status active`.
- Bind to a configuration so the gateway injects the asset at runtime.
- Validate: send a request and check the event for citation records showing which knowledge was used.
- Monitor grounding: filter Events by ungrounded responses (`min_grounding_score` failures).
For leaders
- Hallucinated responses erode user trust and create legal risk; grounding addresses the most common cause by tying every answer to verified organizational content.
- Knowledge lifecycle ensures only reviewed, approved content reaches AI systems — no stale or draft material.
- Citation records provide an auditable proof trail of what the AI “knew” when it generated each response.
- Retiring outdated knowledge is immediate and non-destructive — old assets remain accessible for audit.
Next steps
- Knowledge Base — full feature reference for knowledge management
- Knowledge Lifecycle — detailed lifecycle and promotion workflows
- Quality Assurance for AI Outputs — combine knowledge grounding with quality scoring
- Govern AI Agents — bind knowledge to agent configurations
- Events — explore citation records in the event stream