EU AI Act Template

Policy configuration for compliance with the EU AI Act, targeting high-risk AI system requirements.

Use this page when

  • You are deploying a high-risk AI system under the EU AI Act and need human oversight, bias monitoring, and audit traceability controls.
  • You want a starting config that maps directly to EU AI Act Articles 9, 10, 12, 14, and 15.
  • You want to go from zero to a running EU AI Act–compliant gateway with kt init --template eu-ai-act.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Policy Config

pack:
  name: eu-ai-act
  version: 0.1.0
  enabled: true
  description: EU AI Act high-risk system compliance

policies:
  chain:
    - prompt-injection
    - pii-detector
    - human-oversight
    - bias-monitor
    - quality-scorer
    - audit-logger
  policy:
    prompt-injection:
      response:
        action: block
        message: "Request blocked: potential prompt injection detected"
    pii-detector:
      action: redact
    human-oversight:
      require_human_for:
        - hiring_actions
        - credit_scoring
        - law_enforcement
      action: escalate
      confidence_threshold: 0.6
      default_assignee: eu-ai-review@example.com
      timeout_seconds: 1800
    bias-monitor:
      protected_characteristics:
        - gender
        - ethnicity
        - age
        - disability
      action: escalate
      threshold: 0.6
    quality-scorer:
      benchmarks:
        coherence: true
        completeness: true
      thresholds:
        min_aggregate: 0.7
        min_coherence: 0.75
        min_completeness: 0.8
      failure_action:
        action: block
    audit-logger:
      immutable: true
      retention_days: 1825
      log_all_access: true

providers:
  targets:
    - id: openai-eu
      provider: openai
      model: gpt-4o-mini
      secret_key_ref:
        env: OPENAI_API_KEY

What It Enforces

Policy             EU AI Act Requirement
human-oversight    Article 14 — Human oversight of high-risk AI
bias-monitor       Article 10 — Non-discrimination and fairness
quality-scorer     Article 9 — Accuracy, robustness, cybersecurity
audit-logger       Article 12 — Record-keeping and traceability
pii-detector       GDPR alignment — Personal data protection
prompt-injection   Article 15 — Robustness against adversarial inputs

Quick Start

# Save the Policy Config example on this page as policy-config.yaml
export OPENAI_API_KEY="sk-your-openai-key"
kt policy lint --file policy-config.yaml
kt gateway run \
  --listen 0.0.0.0:41002 \
  --policy-config policy-config.yaml

Use OPENAI_API_KEY for the provider secret. The example config is runnable as written and keeps the credential out of the YAML file via secret_key_ref.

If you prefer the seeded starter, run kt init --template eu-ai-act first and then add the provider block shown in the example config before linting and running.
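If you take the seeded-starter route, the block to append is the same providers section shown in the example config above, reproduced here as a standalone snippet (the id and model values are from that example; substitute your own as needed):

```yaml
providers:
  targets:
    - id: openai-eu
      provider: openai
      model: gpt-4o-mini
      secret_key_ref:
        env: OPENAI_API_KEY   # resolved from the environment, never stored in YAML
```

Run kt policy lint --file policy-config.yaml again after appending to confirm the merged file is still valid.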

Customization Ideas

  • Add safety-filter for content moderation requirements
  • Tighten bias-monitor.threshold to 0.4 for stricter fairness detection
  • Add data-routing-policy to restrict data flow to EU-region providers
  • Increase audit-logger.retention_days to match your system lifecycle
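Applied to the example config, those ideas might look like the sketch below. The safety-filter policy kind is named above but its placement in the chain is an assumption, as is the 10-year retention figure; check the policy reference before copying:

```yaml
policies:
  chain:
    - prompt-injection
    - pii-detector
    - safety-filter         # added: content moderation
    - human-oversight
    - bias-monitor
    - quality-scorer
    - audit-logger
  policy:
    bias-monitor:
      threshold: 0.4        # tightened from 0.6 for stricter fairness detection
    audit-logger:
      retention_days: 3650  # example: extended to a 10-year system lifecycle
```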

For AI systems

  • Canonical terms: Keeptrusts, eu-ai-act, policy-config.yaml, kt init --template eu-ai-act, human-oversight, bias-monitor, quality-scorer, audit-logger, EU AI Act Article 14, Article 10, Article 12.
  • Related policy kinds: prompt-injection, pii-detector, human-oversight, bias-monitor, quality-scorer, audit-logger.
  • Best next pages: Compliance Policies Configuration, Bias Monitor policy, Templates overview.

For engineers

  • Prerequisites: kt CLI installed, an LLM provider API key, escalation routing configured for the human-oversight policy.
  • Validate: kt policy lint --file policy-config.yaml must pass. Test by sending a prompt that triggers bias detection (e.g., demographic stereotypes) and confirm escalation.
  • Key tuning: lower bias-monitor.threshold (default 0.6) for stricter fairness detection; adjust human-oversight.timeout_seconds based on your review SLA.
  • Add data-routing-policy with EU-region restriction if data residency is required.
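For the data-residency bullet, a minimal data-routing-policy sketch might look like the following. The policy kind is named on this page, but every option below is a hypothetical schema for illustration, not a documented interface; consult the data-routing-policy reference for the real option names:

```yaml
policy:
  data-routing-policy:
    allowed_regions:        # hypothetical option: permitted provider regions
      - eu-west
      - eu-central
    action: block           # hypothetical option: refuse requests routed outside the EU
```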

For leaders

  • This template addresses Articles 9, 10, 12, 14, and 15 of the EU AI Act for high-risk system classification.
  • Human-oversight escalation with approval gates satisfies the Article 14 mandate for meaningful human control.
  • Bias monitoring with protected-category detection (gender, ethnicity, age, disability) demonstrates Article 10 compliance for non-discrimination.
  • The 5-year audit retention (1,825 days) covers the expected AI system lifecycle documentation requirements.
  • Pair with a data-routing-policy to ensure data stays within EU jurisdiction.

Next steps