EU AI Act Compliance
The EU AI Act (Regulation 2024/1689) establishes the world's first comprehensive AI regulation. Keeptrusts provides the technical controls needed to demonstrate compliance with the Act's requirements for high-risk AI systems.
Use this page when
- You are deploying high-risk AI systems that must comply with the EU AI Act (Regulation 2024/1689).
- You need to demonstrate compliance with Articles 9-15 requirements: risk management, data governance, transparency, human oversight, and accuracy.
- You want a technical control framework that maps directly to EU AI Act obligations for high-risk AI classification.
Primary audience
- Primary: Technical Leaders
- Secondary: Technical Engineers, AI Agents
EU AI Act Risk Classification
| Risk Level | Examples | Keeptrusts Approach |
|---|---|---|
| Unacceptable | Social scoring, real-time biometric ID | Block deployment entirely |
| High-Risk | Employment, credit, healthcare, law enforcement | Full policy stack below |
| Limited Risk | Chatbots, content generation | Transparency obligations |
| Minimal Risk | Spam filters, AI in games | Voluntary codes of practice |
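The table above can be read as a lookup from use case to gateway action. A minimal sketch of that mapping — the tier names and actions mirror the table, but the `gateway_action` helper and the use-case keys are hypothetical, not part of the Keeptrusts API:

```python
# Hypothetical sketch: map EU AI Act risk tiers to gateway actions,
# mirroring the risk classification table above.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "realtime_biometric_id": "unacceptable",
    "employment_screening": "high",
    "credit_scoring": "high",
    "chatbot": "limited",
    "spam_filter": "minimal",
}

TIER_ACTIONS = {
    "unacceptable": "block",      # block deployment entirely
    "high": "full_policy_stack",  # apply the full policy chain below
    "limited": "transparency",    # disclose AI involvement to users
    "minimal": "none",            # voluntary codes of practice
}

def gateway_action(use_case: str) -> str:
    """Return the gateway action for a use case; unclassified use
    cases are treated as high-risk until reviewed."""
    tier = RISK_TIERS.get(use_case, "high")
    return TIER_ACTIONS[tier]
```

Treating unknown use cases as high-risk is a deliberately conservative default: under-classification is the costly failure mode under the Act.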
High-Risk AI System Requirements
| Article | Requirement | Keeptrusts Policy |
|---|---|---|
| Art. 9 | Risk management system | Full policy chain |
| Art. 10 | Data governance | pii-detector, data-routing-policy |
| Art. 11 | Technical documentation | audit-logger |
| Art. 12 | Record-keeping | audit-logger |
| Art. 13 | Transparency | human-oversight |
| Art. 14 | Human oversight | human-oversight |
| Art. 15 | Accuracy, robustness, cybersecurity | quality-scorer, prompt-injection, safety-filter |
Complete Policy Configuration (High-Risk)
```yaml
pack:
  name: eu-ai-act-high-risk
  version: 1.0.0
  enabled: true

policies:
  chain:
    - prompt-injection
    - rbac
    - pii-detector
    - data-routing-policy
    - bias-monitor
    - human-oversight
    - quality-scorer
    - safety-filter
    - audit-logger

policy:
  prompt-injection:
    response:
      action: block
      message: "Request blocked: potential prompt injection detected"
  rbac:
    deny_if_missing:
      - X-User-ID
      - X-User-Role
  pii-detector:
    action: redact
    detect_patterns:
      - name
      - email
      - phone
      - address
      - national_id
      - biometric_data
  data-routing-policy:
    require_zero_data_retention: true
    require_no_training: false
    on_no_compliant_provider: block
    log_provider_selection: true
  bias-monitor:
    protected_characteristics:
      - race
      - gender
      - age
      - disability
      - religion
      - sexual_orientation
      - national_origin
    threshold: 0.85
    action: block
  human-oversight:
    require_human_for:
      - high-risk-decision
      - automated-decision
    action: escalate
    confidence_threshold: 0.5
    default_assignee: human-oversight-team
    timeout_seconds: 3600
  quality-scorer:
    benchmarks:
      coherence: true
      completeness: true
    thresholds:
      min_aggregate: 0.8
      min_coherence: 0.75
      min_completeness: 0.8
    failure_action:
      action: block
  safety-filter:
    action: block
  audit-logger:
    immutable: true
    retention_days: 3650
    log_all_access: true

providers:
  targets:
    - id: openai-eu
      provider: openai
      model: gpt-4o-mini
      secret_key_ref:
        env: OPENAI_API_KEY
      data_policy:
        zero_data_retention: true
        training_opt_out: true
        retention_days: 0
```
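The `policies.chain` list above is ordered: each request passes through the policies in sequence, and a blocking verdict stops processing before later policies run. A minimal sketch of that evaluation model — the function names and verdict strings are illustrative, not the Keeptrusts internals:

```python
# Illustrative model of sequential policy-chain evaluation: each policy
# inspects the request and returns "allow", "block", or "escalate".
from typing import Callable

Policy = Callable[[dict], str]

def rbac(request: dict) -> str:
    """Mirrors the rbac deny_if_missing config: both identity
    headers must be present or the request is blocked."""
    required = ("X-User-ID", "X-User-Role")
    if any(h not in request.get("headers", {}) for h in required):
        return "block"
    return "allow"

def run_chain(request: dict, chain: list[Policy]) -> str:
    """Apply policies in configured order; the first non-allow
    verdict short-circuits the chain."""
    for policy in chain:
        verdict = policy(request)
        if verdict != "allow":
            return verdict
    return "allow"
```

The short-circuit behavior is why `prompt-injection` and `rbac` sit at the head of the chain: cheap rejections happen before PII scanning or provider routing is attempted.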
Article 14: Human Oversight Controls
The EU AI Act requires that high-risk AI systems can be effectively overseen by natural persons. Keeptrusts implements this through:
- Decision hold — Automated decisions are held for human review before taking effect
- Override capability — Human reviewers can override AI decisions
- Halt capability — Designated roles can stop the system entirely
- Explanation — AI decisions include explanations for the human reviewer
```yaml
pack:
  name: eu-ai-act-example-2
  version: 1.0.0
  enabled: true

policies:
  chain:
    - human-oversight

policy:
  human-oversight:
    require_human_for:
      - credit-decision
      - employment-decision
      - benefit-determination
      - law-enforcement-action
    action: escalate
    confidence_threshold: 0.5
```
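Under one plausible reading of the configuration above, a decision is held and escalated if its type appears in `require_human_for`, or if model confidence falls below `confidence_threshold`. A minimal sketch of that hold-and-review logic — the `Decision` shape and `route` function are hypothetical, not the Keeptrusts implementation:

```python
# Hypothetical sketch of Article 14 decision hold: listed decision
# types and low-confidence decisions are escalated to a human reviewer
# instead of taking effect automatically.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.5
REQUIRE_HUMAN_FOR = {
    "credit-decision", "employment-decision",
    "benefit-determination", "law-enforcement-action",
}

@dataclass
class Decision:
    decision_type: str
    confidence: float

def route(decision: Decision) -> str:
    """Escalate listed high-risk types unconditionally, and any
    other decision whose confidence is below the threshold."""
    if decision.decision_type in REQUIRE_HUMAN_FOR:
        return "escalate"  # always held for human review
    if decision.confidence < CONFIDENCE_THRESHOLD:
        return "escalate"  # low confidence: hold for review
    return "allow"
```

Escalated decisions would then land in the reviewer queue (the `default_assignee` and `timeout_seconds` settings in the full pack govern who reviews and for how long the hold lasts).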
Article 10: Data Governance
```yaml
pack:
  name: eu-ai-act-example-3
  version: 1.0.0
  enabled: true

policies:
  chain:
    - pii-detector
    - data-routing-policy

policy:
  pii-detector:
    action: redact
  data-routing-policy:
    require_zero_data_retention: true
    require_no_training: true
    on_no_compliant_provider: block
    log_provider_selection: true

providers:
  targets:
    - id: openai-eu
      provider: openai
      model: gpt-4o-mini
      secret_key_ref:
        env: OPENAI_API_KEY
      data_policy:
        zero_data_retention: true
        training_opt_out: true
        retention_days: 0
```
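The redaction step above rewrites detected identifiers before the request leaves the gateway. A minimal regex-based sketch for two of the configured patterns (email, phone) — this is an illustration of the redaction idea, not the detector Keeptrusts actually ships:

```python
import re

# Illustrative redaction for two pii-detector patterns; a production
# detector would cover all configured patterns (name, address,
# national_id, biometric_data, ...) with far more robust matching.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each detected identifier with a typed placeholder,
    so downstream providers never see the raw value."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text
```

Typed placeholders (rather than plain deletion) keep the redacted prompt coherent for the model while still satisfying the data-minimization goal of Article 10.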
Conformity Assessment Evidence
Use Keeptrusts exports to generate evidence for conformity assessments:
```shell
# Export bias monitoring data for Art. 9 risk assessment
kt events export \
  --from "2024-01-01" \
  --to "2024-12-31" \
  --policy "bias-monitor" \
  --format json \
  --output art9-bias-assessment.json

# Export human oversight decisions for Art. 14
kt events export \
  --policy "human-oversight" \
  --format json \
  --output art14-oversight-log.json

# Export quality scores for Art. 15 accuracy
kt events export \
  --policy "quality-scorer" \
  --format json \
  --output art15-accuracy-monitoring.json
```
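Once exported, the JSON evidence can be summarized for an assessment dossier. A minimal sketch, assuming each export is a JSON array of event objects with `policy` and `action` fields — the actual export schema may differ:

```python
import json
from collections import Counter

def summarize(path: str) -> Counter:
    """Count events per (policy, action) pair in an exported file.
    Assumes a JSON array of {"policy": ..., "action": ...} objects;
    adjust the field names to match the real export schema."""
    with open(path) as f:
        events = json.load(f)
    return Counter((e["policy"], e["action"]) for e in events)
```

A per-action tally of `bias-monitor` blocks, for example, gives an assessor a quick quantitative view of how often the fairness threshold was tripped over the reporting period.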
Provider Recommendations for EU Compliance
| Requirement | Provider | Reason |
|---|---|---|
| EU data sovereignty | Mistral AI | French company, EU data processing |
| EU cloud | Google Vertex AI (europe-west4) | GCP EU regions |
| EU cloud | Azure OpenAI (West Europe) | Azure EU regions |
| Maximum control | Self-hosted (Ollama/vLLM) | Complete data sovereignty |
For AI systems
- Canonical terms: Keeptrusts EU AI Act compliance, high-risk AI governance, risk classification, human oversight, transparency obligations.
- Policy pack: `eu-ai-act-high-risk` with chain: `prompt-injection` → `rbac` → `pii-detector` → `data-routing-policy` → `bias-monitor` → `human-oversight` → `quality-scorer` → `safety-filter` → `audit-logger`.
- Article mapping: Art. 9 (full policy chain), Art. 10 (`pii-detector`, `data-routing-policy`), Art. 11-12 (`audit-logger`), Art. 13-14 (`human-oversight`), Art. 15 (`quality-scorer`, `prompt-injection`, `safety-filter`).
- Key policies: `human-oversight` (Article 14 mandatory human control), `bias-monitor` (Article 10 fairness), `quality-scorer` (Article 15 accuracy/robustness), `audit-logger` (Articles 11-12 documentation).
- CLI: `kt gateway run --policy-config ./policy-config.yaml`, `kt events tail --policy human-oversight`, `kt events tail --policy bias-monitor`.
For engineers
- Deploy:
kt gateway run --policy-config ./policy-config.yaml --port 41002 - Validate:
kt doctorconfirms human-oversight, bias-monitor, quality-scorer, data-routing-policy, and audit-logger are active. - Monitor human oversight:
kt events tail --policy human-oversight(Article 14 compliance). - Monitor bias:
kt events tail --policy bias-monitor(Article 10 fairness checks). - Monitor accuracy:
kt events tail --policy quality-scorer(Article 15 robustness). - Export compliance evidence:
kt export create --format json --filter "policy=audit-logger,human-oversight,bias-monitor" - Data routing:
data-routing-policyensures AI data stays within EU jurisdiction for adequacy compliance. - Console: Audit Log (Article 11-12 technical documentation), Escalations (human oversight approvals), Events (full compliance monitoring).
For leaders
- Addresses EU AI Act Regulation 2024/1689 — specifically Articles 9 (risk management), 10 (data governance), 11 (technical documentation), 12 (record-keeping), 13 (transparency), 14 (human oversight), and 15 (accuracy, robustness, cybersecurity).
- Provides technical compliance evidence for each Article requirement through automated policy enforcement and audit logs.
- Human oversight (Article 14) is technically enforced, not just documented — high-risk decisions require explicit human approval.
- Bias monitoring satisfies Article 10 fairness requirements with automated detection across protected categories.
- Unacceptable-risk AI uses (social scoring, real-time biometric ID) can be blocked entirely at the gateway.
- Full audit trail supports regulatory examination and conformity assessment documentation.
Next steps
- Industries overview — Compare all industry policy configurations
- Defense (EU) — EU defense with dual-use and EU AI Act combined
- Healthcare (EU GDPR) — GDPR + EU AI Act for health AI
- HR & Recruitment — High-risk employment AI under EU AI Act
- Automotive — High-risk vehicle AI under EU AI Act
- Quickstart — Deploy your first gateway in minutes