Meet EU AI Act Requirements with Pre-Built Controls
The EU AI Act imposes specific obligations on organizations deploying AI systems — particularly those classified as high-risk. Keeptrusts maps directly to these requirements with pre-built controls for human oversight, bias monitoring, quality assurance, and audit logging.
Use this page when
- You deploy AI systems classified as high-risk under the EU AI Act and need to satisfy specific Article obligations.
- You need pre-built controls for human oversight (Article 14), bias monitoring (Article 10/27), and record-keeping (Article 12).
- You are preparing evidence packages for EU AI Act regulatory submissions or conformity assessments.
Primary audience
- Primary: Technical Leaders
- Secondary: Technical Engineers, AI Agents
What you'll achieve
- Human oversight enforcement with mandatory escalation for high-risk decisions
- Bias detection and monitoring across protected attributes
- Quality scoring with configurable thresholds that flag low-quality outputs
- Risk classification labeling for every AI interaction
- Immutable audit trail satisfying Article 12 record-keeping requirements
- Evidence export ready for regulatory submission
EU AI Act requirement mapping
| EU AI Act Article | Requirement | Keeptrusts control |
|---|---|---|
| Article 9 | Risk management system | Risk classification + policy chain |
| Article 10 | Data governance | data-routing-policy + pii-detector |
| Article 12 | Record-keeping | audit-logger with immutable: true |
| Article 13 | Transparency | Event logging with policy outcome metadata |
| Article 14 | Human oversight | human-oversight policy + escalation queue |
| Article 15 | Accuracy and robustness | quality-scorer + citation-verifier |
| Article 26 | Deployer obligations | Template-based controls + evidence export |
| Article 27 | Fundamental rights assessment | bias-monitor + export evidence |
Human oversight (Article 14)
The EU AI Act requires that high-risk AI systems include appropriate human oversight measures. Keeptrusts enforces this through the human-oversight policy:
```yaml
policies:
  chain:
    - human-oversight
    - quality-scorer
    - audit-logger

policy:
  human-oversight:
    escalate_on:
      - high_risk_classification
      - low_quality_score
      - bias_detected
    require_resolution_within_hours: 24
```
When a request triggers any of the escalation conditions:
- The response is flagged in the Escalation queue
- A reviewer must claim the escalation and record a resolution
- The escalation lifecycle (created → claimed → resolved) is fully auditable
This satisfies the requirement that humans can "intervene on the operation of the high-risk AI system or interrupt the system."
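The created → claimed → resolved lifecycle can be sketched as a small state machine. This is purely illustrative — the class, field names, and methods below are assumptions, not Keeptrusts' actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative sketch of the created -> claimed -> resolved lifecycle.
# All names here are assumptions; Keeptrusts' internal model is not documented.
@dataclass
class Escalation:
    reason: str                      # e.g. "high_risk_classification"
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    deadline_hours: int = 24         # mirrors require_resolution_within_hours
    claimed_by: Optional[str] = None
    resolution: Optional[str] = None

    def claim(self, reviewer: str) -> None:
        if self.claimed_by is not None:
            raise ValueError("escalation already claimed")
        self.claimed_by = reviewer

    def resolve(self, note: str) -> None:
        if self.claimed_by is None:
            raise ValueError("claim the escalation before resolving it")
        self.resolution = note

    @property
    def overdue(self) -> bool:
        # Unresolved past the deadline -> flag for compliance reporting
        return (self.resolution is None
                and datetime.now(timezone.utc) > self.created_at + timedelta(hours=self.deadline_hours))

e = Escalation(reason="bias_detected")
e.claim("reviewer@example.com")
e.resolve("False positive; output approved after manual review")
```

The key invariant is that a resolution cannot be recorded without a prior claim, which is what makes the audit trail meaningful: every resolved escalation names an accountable reviewer.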
Bias monitoring (Article 10 / Article 27)
The bias-monitor policy detects potential bias across configured attributes:
```yaml
policies:
  chain:
    - bias-monitor
    - human-oversight
    - audit-logger

policy:
  bias-monitor:
    protected_attributes:
      - gender
      - ethnicity
      - age
      - disability
      - religion
    action: escalate
    threshold: 0.7
```
When potential bias is detected above the threshold, the request is escalated for human review. Over time, bias detection events provide statistical evidence for fundamental rights impact assessments.
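The threshold logic amounts to a simple per-attribute check. The sketch below is illustrative only — how Keeptrusts actually computes bias scores is not documented here, so the scores are supplied directly as a stand-in:

```python
# Illustrative threshold check mirroring the bias-monitor config above.
# The per-attribute scores would come from the gateway's detector; here
# they are passed in directly as a stand-in.
PROTECTED_ATTRIBUTES = ["gender", "ethnicity", "age", "disability", "religion"]
THRESHOLD = 0.7  # mirrors policy.bias-monitor.threshold

def evaluate_bias(scores: dict[str, float]) -> str:
    """Return 'escalate' if any configured attribute exceeds the threshold."""
    flagged = [a for a in PROTECTED_ATTRIBUTES if scores.get(a, 0.0) > THRESHOLD]
    return "escalate" if flagged else "pass"

print(evaluate_bias({"gender": 0.82, "age": 0.10}))  # above threshold
print(evaluate_bias({"gender": 0.40}))               # below threshold
```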
Quality scoring (Article 15)
The EU AI Act requires that AI systems achieve "appropriate levels of accuracy, robustness and cybersecurity." The quality-scorer policy enforces output quality thresholds:
```yaml
policies:
  chain:
    - quality-scorer
    - citation-verifier
    - audit-logger

policy:
  quality-scorer:
    min_output_chars: 100
    min_sentences: 2
    max_repetition_ratio: 0.3
    on_low_quality: escalate
  citation-verifier:
    require_grounding: true
    min_citation_coverage: 0.6
    on_ungrounded: escalate
```
The citation-verifier adds a groundedness check — verifying that AI outputs are supported by provided context rather than hallucinated.
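A coverage-style groundedness check can be sketched as follows. This is a deliberately crude stand-in (exact substring matching); the actual citation-verifier algorithm is not specified in this document:

```python
# Illustrative coverage check: what fraction of output sentences can be
# matched back to the provided context? A stand-in for the real verifier,
# whose matching strategy is not documented here.
def citation_coverage(output_sentences: list[str], context: str) -> float:
    if not output_sentences:
        return 0.0
    grounded = sum(1 for s in output_sentences if s.lower() in context.lower())
    return grounded / len(output_sentences)

def verify(output_sentences: list[str], context: str, min_coverage: float = 0.6) -> str:
    # min_coverage mirrors policy.citation-verifier.min_citation_coverage
    return "pass" if citation_coverage(output_sentences, context) >= min_coverage else "escalate"
```

With `min_citation_coverage: 0.6`, an output where only half the sentences are supported by the context would be escalated rather than returned.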
Audit logging (Article 12)
Article 12 requires automatic recording of events ("logs") throughout the AI system's lifecycle. Keeptrusts satisfies this with immutable event logging:
```yaml
pack:
  name: meet-eu-ai-act-example-4
  version: 1.0.0
  enabled: true

policies:
  chain:
    - audit-logger

policy:
  audit-logger:
    immutable: true
    retention_days: 3650
    log_all_access: true
```
Every request generates an event record containing:
- Timestamp and unique request ID
- Policy evaluation outcomes (pass/fail/escalate for each policy in the chain)
- Redaction decisions and categories
- Provider routing decisions
- Human oversight escalation status
- Quality scores and bias detection results
These records cannot be modified within the retention period and can be exported for regulatory review.
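One common way to make such records tamper-evident is hash chaining, where each event embeds the hash of its predecessor, so altering any stored record invalidates every record after it. The sketch below illustrates the general technique only — it is not a description of how Keeptrusts' storage layer actually works:

```python
import hashlib
import json

# Illustrative hash chain: each event embeds the hash of the previous one,
# so modifying any stored record breaks verification of all later records.
# (General technique only; Keeptrusts' actual mechanism is not documented.)
def append_event(log: list[dict], event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list[dict]) -> bool:
    prev_hash = "0" * 64
    for rec in log:
        body = {"event": rec["event"], "prev_hash": rec["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev_hash or rec["hash"] != digest:
            return False
        prev_hash = rec["hash"]
    return True

log = []
append_event(log, {"request_id": "req-1", "policy": "quality-scorer", "outcome": "pass"})
append_event(log, {"request_id": "req-2", "policy": "bias-monitor", "outcome": "escalate"})
assert verify_chain(log)
log[0]["event"]["outcome"] = "fail"   # tampering with a stored record...
assert not verify_chain(log)          # ...is detected on verification
```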
Full EU AI Act compliance configuration
```yaml
pack:
  name: eu-ai-act-compliance
  version: "1.0"
  description: EU AI Act Article 9-15 compliance controls

policies:
  chain:
    - pii-detector
    - bias-monitor
    - prompt-injection
    - quality-scorer
    - citation-verifier
    - human-oversight
    - audit-logger

policy:
  pii-detector:
    action: redact
    redaction:
      marker_format: label
      include_metadata: true
  bias-monitor:
    protected_attributes:
      - gender
      - ethnicity
      - age
      - disability
      - religion
    action: escalate
    threshold: 0.7
  prompt-injection:
    embedding_threshold: 0.8
    response:
      action: block
  quality-scorer:
    min_output_chars: 100
    min_sentences: 2
    on_low_quality: escalate
  citation-verifier:
    require_grounding: true
    min_citation_coverage: 0.6
    on_ungrounded: escalate
  human-oversight:
    escalate_on:
      - high_risk_classification
      - low_quality_score
      - bias_detected
    require_resolution_within_hours: 24
  audit-logger:
    immutable: true
    retention_days: 3650
    include_request_metadata: true
    include_policy_outcomes: true
```
Evidence for regulatory submission
When regulators request evidence of compliance, export a complete package:
- Policy configuration — the active config version showing all controls in force
- Event export — filtered by time range and policy type
- Escalation records — proving human oversight was exercised
- Bias monitoring data — aggregate statistics from `bias-monitor` events
- Quality score distribution — showing output quality trends over time
```bash
# Export EU AI Act evidence for Q1 2026
kt export create \
  --format json \
  --from "2026-01-01T00:00:00Z" \
  --to "2026-03-31T23:59:59Z" \
  --include-policy-outcomes \
  --include-escalations
```
Quick wins
- Deploy the EU AI Act template — immediate coverage for Articles 12–15
- Enable `human-oversight` — satisfy the human oversight requirement from day one
- Configure `bias-monitor` — start collecting bias detection data for impact assessments
- Set `audit-logger` to `immutable: true` with multi-year retention — Article 12 compliance
- Create your first evidence export — verify the export workflow before regulators ask
For AI systems
- Canonical terms: human-oversight policy, bias-monitor, quality-scorer, audit-logger (immutable), risk classification, escalation queue.
- Config keys: `policy.human-oversight.escalate_on`, `policy.bias-monitor.protected_attributes`, `policy.audit-logger.immutable`, `policy.quality-scorer.overall_min_score`.
- Article mapping: Art. 9 → risk classification, Art. 12 → audit-logger, Art. 14 → human-oversight, Art. 15 → quality-scorer.
- Best next pages: EU AI Act Template, Pass Compliance Audits, Escalations.
For engineers
- Prerequisites: gateway running with `human-oversight`, `bias-monitor`, `quality-scorer`, and `audit-logger` in the chain.
- Set `audit-logger.immutable: true` and `retention_days: 2555` for Article 12 record-keeping.
- Configure `human-oversight.escalate_on` to include `high_risk_classification` and `low_quality_score`.
- Validate: trigger an escalation and confirm it appears in the Escalations queue with a 24-hour resolution requirement.
- Export evidence: generate a CSV/PDF export scoped to the audit period for regulatory submission.
For leaders
- The EU AI Act's obligations for high-risk AI systems begin applying in August 2026; maximum penalties under the Act reach €35M or 7% of global annual turnover.
- Pre-built controls map directly to Article requirements — no custom development needed for baseline compliance.
- Human oversight enforcement (mandatory escalation + resolution tracking) satisfies Article 14 intervention requirements.
- Evidence exports produce audit-ready packages in minutes, reducing regulator response time from weeks to days.
Next steps
- EU AI Act Template — ready-to-deploy template with detailed configuration
- Pass Compliance Audits — broader audit readiness guide
- Escalations — human oversight reviewer workflow
- Quality Benchmarking Template — advanced quality controls
- Export Evidence — step-by-step export workflow