Citation Verification Template

Policy configuration for RAG pipelines and research tools that need grounded, verifiable responses.

Use this page when

  • You are building a RAG pipeline or research tool and need to verify that AI responses cite real sources.
  • You want a starting config that detects hallucinations and enforces a minimum groundedness ratio before responses reach users.
  • You want to go from zero to a running citation-verification gateway with kt init --template citation-verification.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Policy Config

pack:
  name: citation-verification
  version: 0.1.0
  enabled: true
  description: Response groundedness and citation verification
policies:
  chain:
    - prompt-injection
    - citation-verifier
    - audit-logger
  policy:
    prompt-injection:
      response:
        action: block
        message: "Request blocked: potential prompt injection detected"
    citation-verifier:
      require_sources: true
      require_source_match: true
      min_confidence: 0.8
      min_groundedness: 0.8
      extract_patterns:
        - academic
        - url
        - quote
      output_action:
        unverified_action: flag
        hallucination_action: block
      response:
        include_verification_report: true
    audit-logger:
      retention_days: 365
providers:
  targets:
    - id: openai-rag
      provider: openai
      model: gpt-4o-mini
      secret_key_ref:
        env: OPENAI_API_KEY
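The extract_patterns list controls which citation shapes the verifier pulls out of a response before checking them. The policy's actual extractors are internal; the sketch below only illustrates the kinds of matches academic, url, and quote extraction could produce, using assumed regexes:

```python
import re

# Illustrative approximations of the three extract_patterns values.
# These regexes are assumptions for explanation only, not the
# citation-verifier policy's actual implementation.
PATTERNS = {
    # e.g. "(Smith et al., 2021)" or "(Doe, 1999)"
    "academic": re.compile(r"\([A-Z][A-Za-z]+(?: et al\.)?,\s*\d{4}\)"),
    # bare http(s) URLs
    "url": re.compile(r"https?://[^\s)\"']+"),
    # double-quoted spans long enough to be a real quotation
    "quote": re.compile(r"\"[^\"]{20,}\""),
}

def extract_citations(text: str) -> dict:
    """Return every candidate citation found, grouped by pattern name."""
    return {name: rx.findall(text) for name, rx in PATTERNS.items()}

response = (
    'Transformer models were introduced in (Vaswani et al., 2017); '
    'see https://arxiv.org/abs/1706.03762 for the paper, which states '
    '"Attention is all you need to draw global dependencies between input and output."'
)
found = extract_citations(response)
```

Each extracted candidate is then checked against real sources, which is what require_sources and require_source_match gate on.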

Quick Start

# Save the Policy Config example on this page as policy-config.yaml
export OPENAI_API_KEY="sk-your-openai-key"
kt policy lint --file policy-config.yaml
kt gateway run \
  --listen 0.0.0.0:41002 \
  --policy-config policy-config.yaml

Set OPENAI_API_KEY before running the gateway. The example config is runnable end-to-end and keeps the provider secret outside YAML.

If you prefer the seeded starter, run kt init --template citation-verification first and then add the provider block shown in the example config before linting and running.
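Once the gateway is running, you can smoke-test it from a client. The endpoint path and the verification-report field names below are assumptions for illustration only (check the API Reference for the real schema); the classify logic simply mirrors the unverified_action: flag / hallucination_action: block settings in the example config:

```python
import json
import urllib.request

# Assumed OpenAI-compatible path on the gateway's listen address.
GATEWAY_URL = "http://127.0.0.1:41002/v1/chat/completions"

def query_gateway(prompt: str) -> dict:
    """Send a prompt through the gateway and return the parsed JSON response."""
    payload = json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        GATEWAY_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def classify(report: dict) -> str:
    """Map a verification report (hypothetical schema) onto the configured actions."""
    if report.get("hallucinated_citations"):
        return "block"  # hallucination_action: block
    if report.get("unverified_citations"):
        return "flag"   # unverified_action: flag
    return "pass"

# Stubbed report, so the decision logic can be checked without a live gateway:
sample_report = {"unverified_citations": ["(Doe, 2020)"], "hallucinated_citations": []}
```

With a live gateway, you would feed classify the verification report attached to query_gateway's response (enabled here by include_verification_report: true).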

For AI systems

For engineers

  • Prerequisites: kt CLI installed, an LLM provider API key (e.g., OPENAI_API_KEY).
  • Validate: kt policy lint --file policy-config.yaml should pass cleanly.
  • Test: send a query that should produce citations and verify the response includes verification metadata; send a fabricated-source query and confirm escalation.
  • Key tuning: adjust min_groundedness (default 0.8) to balance strictness against false-positive rate.
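When tuning the groundedness threshold, it helps to see the ratio it gates. The policy's real scoring is internal; this minimal sketch only shows the ratio concept and how the default 0.8 splits passing from failing responses:

```python
def groundedness_ratio(verified: int, total: int) -> float:
    """Fraction of extracted citations that matched a real source."""
    if total == 0:
        return 1.0  # no citations to verify, so nothing to gate on
    return verified / total

def passes_threshold(verified: int, total: int, min_groundedness: float = 0.8) -> bool:
    """True if the response clears the configured groundedness floor."""
    return groundedness_ratio(verified, total) >= min_groundedness

# With the default 0.8, a response where 4 of 5 citations verify passes,
# while one where only 3 of 5 verify does not.
```

Raising the threshold toward 1.0 catches more weakly grounded responses at the cost of flagging more legitimate ones whose sources are merely hard to match.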

For leaders

  • Deploying this template reduces reputational risk from hallucinated citations reaching end users in legal, medical, or financial contexts.
  • Unverified citations are flagged with review metadata, while clearly hallucinated responses are blocked before they reach end users.
  • Audit logging captures every citation verification decision for compliance evidence.

Next steps