EU AI Act Compliance

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive AI regulation. Keeptrusts provides the technical controls needed to demonstrate compliance with the Act's requirements for high-risk AI systems.

Use this page when

  • You are deploying high-risk AI systems that must comply with the EU AI Act (Regulation 2024/1689).
  • You need to demonstrate compliance with the requirements of Articles 9-15: risk management, data governance, transparency, human oversight, and accuracy.
  • You want a technical control framework that maps directly to EU AI Act obligations for systems classified as high-risk.

Primary audience

  • Primary: Technical Leaders
  • Secondary: Technical Engineers, AI Agents

EU AI Act Risk Classification

Risk Level   | Examples                                         | Keeptrusts Approach
Unacceptable | Social scoring, real-time biometric ID           | Block deployment entirely
High-Risk    | Employment, credit, healthcare, law enforcement  | Full policy stack below
Limited Risk | Chatbots, content generation                     | Transparency obligations
Minimal Risk | Spam filters, AI in games                        | Voluntary codes of practice
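
For example, blocking at the unacceptable tier needs nothing more than a pack whose chain ends at safety-filter with a hard block. The sketch below reuses only keys that appear in the full configuration later on this page; the pack name is illustrative, and how safety-filter recognizes categories such as social scoring is not covered by this guide.

pack:
  name: eu-ai-act-unacceptable-block   # illustrative name, not a shipped pack
  version: 1.0.0
  enabled: true

policies:
  chain:
    - safety-filter

policy:
  safety-filter:
    action: block   # refuse the request entirely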

High-Risk AI System Requirements

Article | Requirement                         | Keeptrusts Policy
Art. 9  | Risk management system              | Full policy chain
Art. 10 | Data governance                     | pii-detector, data-routing-policy
Art. 11 | Technical documentation             | audit-logger
Art. 12 | Record-keeping                      | audit-logger
Art. 13 | Transparency                        | human-oversight
Art. 14 | Human oversight                     | human-oversight
Art. 15 | Accuracy, robustness, cybersecurity | quality-scorer, prompt-injection, safety-filter

Complete Policy Configuration (High-Risk)

pack:
  name: eu-ai-act-high-risk
  version: 1.0.0
  enabled: true

policies:
  chain:
    - prompt-injection
    - rbac
    - pii-detector
    - data-routing-policy
    - bias-monitor
    - human-oversight
    - quality-scorer
    - safety-filter
    - audit-logger

policy:
  prompt-injection:
    response:
      action: block
      message: "Request blocked: potential prompt injection detected"
  rbac:
    deny_if_missing:
      - X-User-ID
      - X-User-Role
  pii-detector:
    action: redact
    detect_patterns:
      - name
      - email
      - phone
      - address
      - national_id
      - biometric_data
  data-routing-policy:
    require_zero_data_retention: true
    require_no_training: true
    on_no_compliant_provider: block
    log_provider_selection: true
  bias-monitor:
    protected_characteristics:
      - race
      - gender
      - age
      - disability
      - religion
      - sexual_orientation
      - national_origin
    threshold: 0.85
    action: block
  human-oversight:
    require_human_for:
      - high-risk-decision
      - automated-decision
    action: escalate
    confidence_threshold: 0.5
    default_assignee: human-oversight-team
    timeout_seconds: 3600
  quality-scorer:
    benchmarks:
      coherence: true
      completeness: true
    thresholds:
      min_aggregate: 0.8
      min_coherence: 0.75
      min_completeness: 0.8
    failure_action:
      action: block
  safety-filter:
    action: block
  audit-logger:
    immutable: true
    retention_days: 3650
    log_all_access: true

providers:
  targets:
    - id: openai-eu
      provider: openai
      model: gpt-4o-mini
      secret_key_ref:
        env: OPENAI_API_KEY
      data_policy:
        zero_data_retention: true
        training_opt_out: true
        retention_days: 0
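
Once the configuration is saved (the commands in this guide assume ./policy-config.yaml), start the gateway and verify that the chain loaded:

# Assumes the pack above was saved as ./policy-config.yaml
kt gateway run --policy-config ./policy-config.yaml --port 41002

# Confirms the required policies are active
kt doctor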

Article 14: Human Oversight Controls

The EU AI Act requires that high-risk AI systems can be effectively overseen by natural persons. Keeptrusts implements this through:

  1. Decision hold — Automated decisions are held for human review before taking effect
  2. Override capability — Human reviewers can override AI decisions
  3. Halt capability — Designated roles can stop the system entirely
  4. Explanation — AI decisions include explanations for the human reviewer

pack:
  name: eu-ai-act-example-2
  version: 1.0.0
  enabled: true

policies:
  chain:
    - human-oversight

policy:
  human-oversight:
    require_human_for:
      - credit-decision
      - employment-decision
      - benefit-determination
      - law-enforcement-action
    action: escalate
    confidence_threshold: 0.5
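
Pending escalations can be watched live with the event stream (the same command the engineering checklist below uses for Article 14 monitoring):

kt events tail --policy human-oversight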

Article 10: Data Governance

pack:
  name: eu-ai-act-example-3
  version: 1.0.0
  enabled: true

policies:
  chain:
    - pii-detector
    - data-routing-policy

policy:
  pii-detector:
    action: redact
  data-routing-policy:
    require_zero_data_retention: true
    require_no_training: true
    on_no_compliant_provider: block
    log_provider_selection: true

providers:
  targets:
    - id: openai-eu
      provider: openai
      model: gpt-4o-mini
      secret_key_ref:
        env: OPENAI_API_KEY
      data_policy:
        zero_data_retention: true
        training_opt_out: true
        retention_days: 0
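
Evidence of data-governance enforcement can be exported with the same kt events export pattern shown in the next section; the output filename here is illustrative, and this assumes pii-detector events are filterable with --policy like the other policies:

# Export PII redaction events for Art. 10 data governance evidence
kt events export \
  --policy "pii-detector" \
  --format json \
  --output art10-data-governance.json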

Conformity Assessment Evidence

Use Keeptrusts exports to generate evidence for conformity assessments:

# Export bias monitoring data for Art. 9 risk assessment
kt events export \
  --from "2024-01-01" \
  --to "2024-12-31" \
  --policy "bias-monitor" \
  --format json \
  --output art9-bias-assessment.json

# Export human oversight decisions for Art. 14
kt events export \
  --policy "human-oversight" \
  --format json \
  --output art14-oversight-log.json

# Export quality scores for Art. 15 accuracy
kt events export \
  --policy "quality-scorer" \
  --format json \
  --output art15-accuracy-monitoring.json
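
The same pattern covers Articles 11-12 (technical documentation and record-keeping); the output filename is illustrative:

# Export audit records for Art. 11-12 documentation and record-keeping
kt events export \
  --policy "audit-logger" \
  --format json \
  --output art11-12-records.json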

Provider Recommendations for EU Compliance

Requirement         | Provider                        | Reason
EU data sovereignty | Mistral AI                      | French company, EU data processing
EU cloud            | Google Vertex AI (europe-west4) | GCP EU regions
EU cloud            | Azure OpenAI (West Europe)      | Azure EU regions
Maximum control     | Self-hosted (Ollama/vLLM)       | Complete data sovereignty
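
An EU-based target plugs into the same providers schema used throughout this page. The sketch below is an assumption, not a tested configuration: the mistral provider identifier, model name, and MISTRAL_API_KEY variable are illustrative, and the data_policy values should be verified against the provider's actual terms.

providers:
  targets:
    - id: mistral-eu
      provider: mistral              # assumed provider identifier
      model: mistral-small-latest    # illustrative model name
      secret_key_ref:
        env: MISTRAL_API_KEY         # illustrative env var
      data_policy:
        zero_data_retention: true    # verify against provider terms
        training_opt_out: true
        retention_days: 0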

For AI systems

  • Canonical terms: Keeptrusts EU AI Act compliance, high-risk AI governance, risk classification, human oversight, transparency obligations.
  • Policy pack: eu-ai-act-high-risk with chain: prompt-injection → rbac → pii-detector → data-routing-policy → bias-monitor → human-oversight → quality-scorer → safety-filter → audit-logger.
  • Article mapping: Art. 9 (full policy chain), Art. 10 (pii-detector, data-routing-policy), Art. 11-12 (audit-logger), Art. 13-14 (human-oversight), Art. 15 (quality-scorer, prompt-injection, safety-filter).
  • Key policies: human-oversight (Article 14 mandatory human control), bias-monitor (Article 10 fairness), quality-scorer (Article 15 accuracy/robustness), audit-logger (Articles 11-12 documentation).
  • CLI: kt gateway run --policy-config ./policy-config.yaml, kt events tail --policy human-oversight, kt events tail --policy bias-monitor.

For engineers

  • Deploy: kt gateway run --policy-config ./policy-config.yaml --port 41002
  • Validate: kt doctor confirms human-oversight, bias-monitor, quality-scorer, data-routing-policy, and audit-logger are active.
  • Monitor human oversight: kt events tail --policy human-oversight (Article 14 compliance).
  • Monitor bias: kt events tail --policy bias-monitor (Article 10 fairness checks).
  • Monitor accuracy: kt events tail --policy quality-scorer (Article 15 robustness).
  • Export compliance evidence: kt export create --format json --filter "policy=audit-logger,human-oversight,bias-monitor"
  • Data routing: data-routing-policy ensures AI data stays within EU jurisdiction for adequacy compliance.
  • Console: Audit Log (Article 11-12 technical documentation), Escalations (human oversight approvals), Events (full compliance monitoring).

For leaders

  • Addresses EU AI Act Regulation 2024/1689 — specifically Articles 9 (risk management), 10 (data governance), 11 (technical documentation), 12 (record-keeping), 13 (transparency), 14 (human oversight), and 15 (accuracy, robustness, cybersecurity).
  • Provides technical compliance evidence for each Article requirement through automated policy enforcement and audit logs.
  • Human oversight (Article 14) is technically enforced, not just documented — high-risk decisions require explicit human approval.
  • Bias monitoring satisfies Article 10 fairness requirements with automated detection across protected categories.
  • Unacceptable-risk AI uses (social scoring, real-time biometric ID) can be blocked entirely at the gateway.
  • Full audit trail supports regulatory examination and conformity assessment documentation.

Next steps