
Meet EU AI Act Requirements with Pre-Built Controls

The EU AI Act imposes specific obligations on organizations deploying AI systems — particularly those classified as high-risk. Keeptrusts maps directly to these requirements with pre-built controls for human oversight, bias monitoring, quality assurance, and audit logging.

Use this page when

  • You deploy AI systems classified as high-risk under the EU AI Act and need to satisfy specific Article obligations.
  • You need pre-built controls for human oversight (Article 14), bias monitoring (Article 10/27), and record-keeping (Article 12).
  • You are preparing evidence packages for EU AI Act regulatory submissions or conformity assessments.

Primary audience

  • Primary: Technical Leaders
  • Secondary: Technical Engineers, AI Agents

What you'll achieve

  • Human oversight enforcement with mandatory escalation for high-risk decisions
  • Bias detection and monitoring across protected attributes
  • Quality scoring with configurable thresholds that flag low-quality outputs
  • Risk classification labeling for every AI interaction
  • Immutable audit trail satisfying Article 12 record-keeping requirements
  • Evidence export ready for regulatory submission

EU AI Act requirement mapping

| EU AI Act Article | Requirement | Keeptrusts control |
| --- | --- | --- |
| Article 9 | Risk management system | Risk classification + policy chain |
| Article 10 | Data governance | data-routing-policy + pii-detector |
| Article 12 | Record-keeping | audit-logger with immutable: true |
| Article 13 | Transparency | Event logging with policy outcome metadata |
| Article 14 | Human oversight | human-oversight policy + escalation queue |
| Article 15 | Accuracy and robustness | quality-scorer + citation-verifier |
| Article 26 | Deployer obligations | Template-based controls + evidence export |
| Article 27 | Fundamental rights assessment | bias-monitor + export evidence |

Human oversight (Article 14)

The EU AI Act requires that high-risk AI systems include appropriate human oversight measures. Keeptrusts enforces this through the human-oversight policy:

policies:
  chain:
    - human-oversight
    - quality-scorer
    - audit-logger

policy:
  human-oversight:
    escalate_on:
      - high_risk_classification
      - low_quality_score
      - bias_detected
    require_resolution_within_hours: 24

When a request triggers any of the escalation conditions:

  1. The response is flagged in the Escalation queue
  2. A reviewer must claim the escalation and record a resolution
  3. The escalation lifecycle (created → claimed → resolved) is fully auditable

This satisfies the requirement that humans can "intervene in the operation of the high-risk AI system or interrupt the system."
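The escalation lifecycle above (created → claimed → resolved, with a resolution deadline) can be sketched as a small state machine. This is an illustrative Python sketch; the class and method names are hypothetical, not the Keeptrusts API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum


class EscalationState(Enum):
    CREATED = "created"
    CLAIMED = "claimed"
    RESOLVED = "resolved"


@dataclass
class Escalation:
    """Hypothetical model of one escalation queue entry."""
    request_id: str
    reason: str
    created_at: datetime
    deadline_hours: int = 24          # mirrors require_resolution_within_hours
    state: EscalationState = EscalationState.CREATED
    history: list = field(default_factory=list)  # auditable transition log

    def claim(self, reviewer: str) -> None:
        assert self.state is EscalationState.CREATED
        self.state = EscalationState.CLAIMED
        self.history.append(("claimed", reviewer))

    def resolve(self, resolution: str) -> None:
        assert self.state is EscalationState.CLAIMED
        self.state = EscalationState.RESOLVED
        self.history.append(("resolved", resolution))

    def overdue(self, now: datetime) -> bool:
        """Unresolved past the deadline counts as overdue."""
        return (self.state is not EscalationState.RESOLVED
                and now > self.created_at + timedelta(hours=self.deadline_hours))
```

Because every transition is appended to `history`, the full lifecycle remains reconstructable for audit, which is the property the escalation queue relies on.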


Bias monitoring (Article 10 / Article 27)

The bias-monitor policy detects potential bias across configured attributes:

policies:
  chain:
    - bias-monitor
    - human-oversight
    - audit-logger

policy:
  bias-monitor:
    protected_attributes:
      - gender
      - ethnicity
      - age
      - disability
      - religion
    action: escalate
    threshold: 0.7

When potential bias is detected above the threshold, the request is escalated for human review. Over time, bias detection events provide statistical evidence for fundamental rights impact assessments.
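As a rough illustration of the escalation decision, assuming the policy produces a per-attribute bias score in [0, 1] (the function name and score shape are assumptions for this sketch, not documented Keeptrusts behavior):

```python
# Attributes and threshold mirror the bias-monitor config above.
PROTECTED_ATTRIBUTES = ["gender", "ethnicity", "age", "disability", "religion"]
THRESHOLD = 0.7


def bias_action(scores: dict[str, float], threshold: float = THRESHOLD) -> str:
    """Return 'escalate' when any configured attribute's score exceeds
    the threshold, otherwise 'pass'. Unscored attributes default to 0."""
    flagged = [a for a in PROTECTED_ATTRIBUTES if scores.get(a, 0.0) > threshold]
    return "escalate" if flagged else "pass"
```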


Quality scoring (Article 15)

The EU AI Act requires that AI systems achieve "appropriate levels of accuracy, robustness and cybersecurity." The quality-scorer policy enforces output quality thresholds:

policies:
  chain:
    - quality-scorer
    - citation-verifier
    - audit-logger

policy:
  quality-scorer:
    min_output_chars: 100
    min_sentences: 2
    max_repetition_ratio: 0.3
    on_low_quality: escalate

  citation-verifier:
    require_grounding: true
    min_citation_coverage: 0.6
    on_ungrounded: escalate

The citation-verifier adds a groundedness check — verifying that AI outputs are supported by provided context rather than hallucinated.
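To make the three quality thresholds concrete, here is a deliberately naive scorer. It is a sketch only; Keeptrusts' actual scoring heuristics are not documented here, and the word-level repetition ratio below is just one plausible interpretation:

```python
def repetition_ratio(text: str) -> float:
    """Fraction of words that repeat an earlier word (crude proxy)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return 1 - len(set(words)) / len(words)


def passes_quality(text: str, min_chars: int = 100,
                   min_sentences: int = 2, max_rep: float = 0.3) -> bool:
    """Apply the min_output_chars, min_sentences, and
    max_repetition_ratio thresholds from the config above."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return (len(text) >= min_chars
            and len(sentences) >= min_sentences
            and repetition_ratio(text) <= max_rep)
```

An output failing any check would take the `on_low_quality: escalate` path and land in the human-oversight queue.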


Audit logging (Article 12)

Article 12 requires automatic recording of events ("logs") throughout the AI system's lifecycle. Keeptrusts satisfies this with immutable event logging:

pack:
  name: meet-eu-ai-act-example-4
  version: 1.0.0
  enabled: true

policies:
  chain:
    - audit-logger

policy:
  audit-logger:
    immutable: true
    retention_days: 3650
    log_all_access: true

Every request generates an event record containing:

  • Timestamp and unique request ID
  • Policy evaluation outcomes (pass/fail/escalate for each policy in the chain)
  • Redaction decisions and categories
  • Provider routing decisions
  • Human oversight escalation status
  • Quality scores and bias detection results

These records cannot be modified within the retention period and can be exported for regulatory review.
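Tamper-evident logs are commonly built as a hash chain, where each record embeds the previous record's digest so any later modification is detectable. The sketch below shows that general pattern with fields from the list above; it is a generic illustration, not Keeptrusts' documented internals:

```python
import hashlib
import json


def append_event(log: list, event: dict) -> list:
    """Append an event record linked to the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"prev_hash": prev_hash, **event}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log


def verify_chain(log: list) -> bool:
    """Recompute every digest; any edited field breaks the chain."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body.get("prev_hash") != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```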


Full EU AI Act compliance configuration

pack:
  name: eu-ai-act-compliance
  version: "1.0"
  description: EU AI Act Article 9-15 compliance controls

policies:
  chain:
    - pii-detector
    - bias-monitor
    - prompt-injection
    - quality-scorer
    - citation-verifier
    - human-oversight
    - audit-logger

policy:
  pii-detector:
    action: redact
    redaction:
      marker_format: label
      include_metadata: true

  bias-monitor:
    protected_attributes:
      - gender
      - ethnicity
      - age
      - disability
      - religion
    action: escalate
    threshold: 0.7

  prompt-injection:
    embedding_threshold: 0.8
    response:
      action: block

  quality-scorer:
    min_output_chars: 100
    min_sentences: 2
    on_low_quality: escalate

  citation-verifier:
    require_grounding: true
    min_citation_coverage: 0.6
    on_ungrounded: escalate

  human-oversight:
    escalate_on:
      - high_risk_classification
      - low_quality_score
      - bias_detected
    require_resolution_within_hours: 24

  audit-logger:
    immutable: true
    retention_days: 3650
    include_request_metadata: true
    include_policy_outcomes: true

Evidence for regulatory submission

When regulators request evidence of compliance, export a complete package:

  1. Policy configuration — the active config version showing all controls in force
  2. Event export — filtered by time range and policy type
  3. Escalation records — proving human oversight was exercised
  4. Bias monitoring data — aggregate statistics from bias-monitor events
  5. Quality score distribution — showing output quality trends over time
# Export EU AI Act evidence for Q1 2026
kt export create \
  --format json \
  --from "2026-01-01T00:00:00Z" \
  --to "2026-03-31T23:59:59Z" \
  --include-policy-outcomes \
  --include-escalations
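Before handing an export to a regulator, it is worth sanity-checking its contents. The sketch below tallies policy outcomes from an exported JSON file; the record shape (an `events` array with per-policy `policy_outcomes`) is assumed for illustration and may differ from the actual export schema:

```python
import json
from collections import Counter


def summarize_outcomes(export_path: str) -> Counter:
    """Tally (policy, outcome) pairs across all exported events,
    e.g. to confirm escalations actually appear in the package."""
    with open(export_path) as f:
        events = json.load(f)["events"]
    tally = Counter()
    for ev in events:
        for policy, outcome in ev.get("policy_outcomes", {}).items():
            tally[(policy, outcome)] += 1
    return tally
```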

Quick wins

  1. Deploy the EU AI Act template — immediate coverage for Articles 12–15
  2. Enable human-oversight — satisfy the human oversight requirement from day one
  3. Configure bias-monitor — start collecting bias detection data for impact assessments
  4. Set audit-logger to immutable: true with multi-year retention — Article 12 compliance
  5. Create your first evidence export — verify the export workflow before regulators ask

For AI systems

  • Canonical terms: human-oversight policy, bias-monitor, quality-scorer, audit-logger (immutable), risk classification, escalation queue.
  • Config keys: policy.human-oversight.escalate_on, policy.bias-monitor.protected_attributes, policy.audit-logger.immutable, policy.quality-scorer.on_low_quality.
  • Article mapping: Art. 9 → risk classification, Art. 12 → audit-logger, Art. 14 → human-oversight, Art. 15 → quality-scorer.
  • Best next pages: EU AI Act Template, Pass Compliance Audits, Escalations.

For engineers

  • Prerequisites: gateway running with human-oversight, bias-monitor, quality-scorer, and audit-logger in the chain.
  • Set audit-logger.immutable: true and retention_days: 3650 for Article 12 record-keeping.
  • Configure human-oversight.escalate_on to include high_risk_classification and low_quality_score.
  • Validate: trigger an escalation and confirm it appears in the Escalations queue with a 24-hour resolution requirement.
  • Export evidence: generate a CSV/PDF export scoped to the audit period for regulatory submission.

For leaders

  • The EU AI Act enters full enforcement in 2026; non-compliance penalties reach €35M or 7% of global revenue.
  • Pre-built controls map directly to Article requirements — no custom development needed for baseline compliance.
  • Human oversight enforcement (mandatory escalation + resolution tracking) satisfies Article 14 intervention requirements.
  • Evidence exports produce audit-ready packages in minutes, reducing regulator response time from weeks to days.

Next steps