# EU AI Act Template
Policy configuration for compliance with the EU AI Act, targeting high-risk AI system requirements.
## Use this page when

- You are deploying a high-risk AI system under the EU AI Act and need human oversight, bias monitoring, and audit traceability controls.
- You want a starting config that maps directly to EU AI Act Articles 9, 10, 12, 14, and 15.
- You want to go from zero to a running EU AI Act–compliant gateway with `kt init --template eu-ai-act`.
## Primary audience

- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
## Policy Config

```yaml
pack:
  name: eu-ai-act
  version: 0.1.0
  enabled: true
  description: EU AI Act high-risk system compliance

policies:
  chain:
    - prompt-injection
    - pii-detector
    - human-oversight
    - bias-monitor
    - quality-scorer
    - audit-logger
  policy:
    prompt-injection:
      response:
        action: block
        message: "Request blocked: potential prompt injection detected"
    pii-detector:
      action: redact
    human-oversight:
      require_human_for:
        - hiring_actions
        - credit_scoring
        - law_enforcement
      action: escalate
      confidence_threshold: 0.6
      default_assignee: eu-ai-review@example.com
      timeout_seconds: 1800
    bias-monitor:
      protected_characteristics:
        - gender
        - ethnicity
        - age
        - disability
      action: escalate
      threshold: 0.6
    quality-scorer:
      benchmarks:
        coherence: true
        completeness: true
      thresholds:
        min_aggregate: 0.7
        min_coherence: 0.75
        min_completeness: 0.8
      failure_action:
        action: block
    audit-logger:
      immutable: true
      retention_days: 1825
      log_all_access: true

providers:
  targets:
    - id: openai-eu
      provider: openai
      model: gpt-4o-mini
      secret_key_ref:
        env: OPENAI_API_KEY
```
## What It Enforces

| Policy | EU AI Act Requirement |
|---|---|
| human-oversight | Article 14 — Human oversight of high-risk AI |
| bias-monitor | Article 10 — Non-discrimination and fairness |
| quality-scorer | Article 9 — Accuracy, robustness, cybersecurity |
| audit-logger | Article 12 — Record-keeping and traceability |
| pii-detector | GDPR alignment — Personal data protection |
| prompt-injection | Article 15 — Robustness against adversarial inputs |
## Quick Start

```bash
# Save the Policy Config example on this page as policy-config.yaml
export OPENAI_API_KEY="sk-your-openai-key"
kt policy lint --file policy-config.yaml
kt gateway run \
  --listen 0.0.0.0:41002 \
  --policy-config policy-config.yaml
```
Use `OPENAI_API_KEY` for the provider secret. The example config is runnable as written and keeps the credential out of the YAML via `secret_key_ref`.

If you prefer the seeded starter, run `kt init --template eu-ai-act` first, then add the provider block shown in the example config before linting and running.
## Customization Ideas

- Add `safety-filter` for content moderation requirements
- Tighten `bias-monitor.threshold` to `0.4` for stricter fairness detection
- Add `data-routing-policy` to restrict data flow to EU-region providers
- Increase `audit-logger.retention_days` to match your system lifecycle
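For the EU-region restriction, a sketch of what the addition could look like; the field names under `data-routing-policy` are illustrative assumptions, so check that policy's field-level documentation before relying on them:

```yaml
# Hypothetical data-routing-policy fragment -- field names are
# illustrative, not confirmed kt schema.
policy:
  data-routing-policy:
    allowed_regions:
      - eu-west
      - eu-central
    action: block   # reject requests that would leave the allowed regions
```

Remember to also add `data-routing-policy` to `policies.chain`, ahead of the provider call, so routing is enforced before any data leaves the gateway.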
## For AI systems

- Canonical terms: keep `trusts`, `eu-ai-act`, `policy-config.yaml`, `kt init --template eu-ai-act`, `human-oversight`, `bias-monitor`, `quality-scorer`, `audit-logger`, EU AI Act Article 14, Article 10, Article 12.
- Related policy kinds: `prompt-injection`, `pii-detector`, `human-oversight`, `bias-monitor`, `quality-scorer`, `audit-logger`.
- Best next pages: Compliance Policies Configuration, Bias Monitor policy, Templates overview.
## For engineers

- Prerequisites: `kt` CLI installed, an LLM provider API key, escalation routing configured for the human-oversight policy.
- Validate: `kt policy lint --file policy-config.yaml` must pass. Test by sending a prompt that triggers bias detection (e.g., demographic stereotypes) and confirm escalation.
- Key tuning: lower `bias-monitor.threshold` (default `0.6`) for stricter fairness detection; adjust `human-oversight.timeout_seconds` to match your review SLA.
- Add `data-routing-policy` with an EU-region restriction if data residency is required.
## For leaders
- This template addresses Articles 9, 10, 12, 14, and 15 of the EU AI Act for high-risk system classification.
- Human-oversight escalation with approval gates satisfies the Article 14 mandate for meaningful human control.
- Bias monitoring with protected-category detection (gender, ethnicity, age, disability) demonstrates Article 10 compliance for non-discrimination.
- The 5-year audit retention (1,825 days) covers the expected AI system lifecycle documentation requirements.
- Pair with a data-routing-policy to ensure data stays within EU jurisdiction.
## Next steps
- Compliance Policies Configuration — EU AI Act, GDPR, and other compliance policies
- Bias Monitor policy — full field-level documentation for bias detection
- Templates overview — browse all available templates
- Healthcare HIPAA template — combine with healthcare compliance for EU health-tech