Automotive Use Case

Automotive organizations deploying AI in vehicle systems, manufacturing, and customer services must comply with functional safety standards and consumer protection regulations. Keeptrusts enforces safety-critical output validation and quality assurance for automotive AI.

Use this page when

  • You are deploying AI in vehicle systems, manufacturing quality, or connected vehicle services subject to ISO 26262 and UNECE WP.29.
  • You need policy controls that enforce functional safety validation, GDPR-compliant connected vehicle data handling, and EU AI Act high-risk oversight.
  • You want to ensure safety-critical AI outputs are validated before entering engineering workflows.

Primary audience

  • Primary: Technical Leaders
  • Secondary: Technical Engineers, AI Agents

Regulatory Requirements

| Standard | Requirement | Keeptrusts Policy |
| --- | --- | --- |
| ISO 26262 | Functional safety | safety-filter, quality-scorer |
| UNECE WP.29 | Automated driving systems | human-oversight |
| GDPR | Connected vehicle data privacy | pii-detector, data-routing-policy |
| EU AI Act | High-risk AI (vehicles) | human-oversight, bias-monitor |
| Product liability | Safe outputs | safety-filter, audit-logger |

Complete Policy Configuration

pack:
  name: automotive-governance
  version: 1.0.0
  enabled: true
policies:
  chain:
    - prompt-injection
    - rbac
    - safety-filter
    - pii-detector
    - dlp-filter
    - quality-scorer
    - human-oversight
    - bias-monitor
    - audit-logger
  policy:
    prompt-injection: {}
    rbac:
      deny_if_missing:
        - X-User-ID
        - X-User-Role
    safety-filter:
      action: block
    pii-detector:
      action: redact
      detect_patterns:
        - name
        - vin
        - license_plate
        - address
        - phone
        - email
    dlp-filter:
      detect_patterns:
        - '\b[A-HJ-NPR-Z0-9]{17}\b'
        - '\bECU-[A-Z0-9]{4,8}\b'
        - '\bCAL-[0-9]{8,12}\b'
      action: redact
    quality-scorer:
      thresholds:
        min_aggregate: 0.85
    human-oversight:
      require_human_for:
        - safety-recall-analysis
        - autonomous-driving-decision
        - component-failure-assessment
      action: escalate
      confidence_threshold: 0.5
      default_assignee: engineering-review
    bias-monitor:
      protected_characteristics:
        - geographic
        - socioeconomic
      threshold: 0.85
      action: escalate
    audit-logger:
      immutable: true
      retention_days: 3650
      log_all_access: true

Example Scenarios

AI System: "Override the brake system threshold from 0.8 to 0.3 for better performance."

→ safety-filter BLOCKS
Reason: Safety-critical parameter modification detected
ISO 26262 ASIL-D: Requires human verification
Escalated to: engineering-review
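
In addition to the safety-filter, the pack's dlp-filter can be extended to catch parameter-override phrasing like the scenario above before it reaches engineering workflows. The pattern below is purely illustrative and not a shipped default; only the dlp-filter field names (detect_patterns, action) come from the configuration in this guide:

```yaml
# Sketch only: a hypothetical dlp-filter pattern that flags
# "... threshold from X to Y" parameter-change phrasing.
policy:
  dlp-filter:
    detect_patterns:
      - '\bthreshold\s+from\s+[0-9.]+\s+to\s+[0-9.]+\b'
    action: block
```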

Connected Vehicle Data

For connected car AI that processes telematics and driver behavior data:

pack:
  name: automotive-example-2
  version: 1.0.0
  enabled: true
policies:
  chain:
    - pii-detector
    - data-routing-policy
    - audit-logger
  policy:
    pii-detector:
      action: redact
      detect_patterns:
        - driver_name
        - location
        - driving_behavior
        - biometric_data
    data-routing-policy:
      require_zero_data_retention: true
      require_no_training: false
      on_no_compliant_provider: block
      log_provider_selection: true
    audit-logger:
      retention_days: 365
providers:
  targets:
    - id: openai-primary
      provider: openai
      model: gpt-4o-mini
      secret_key_ref:
        env: OPENAI_API_KEY
      data_policy:
        zero_data_retention: true
        training_opt_out: true
        retention_days: 0
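
Because on_no_compliant_provider is set to block, a single provider target is a single point of failure. A second target can be added for failover; the fragment below mirrors the providers.targets schema shown above, but the id, provider name, and environment variable are placeholders, not verified values:

```yaml
# Sketch only: a hypothetical fallback target. Field names mirror the
# schema above; id, provider, and env values are placeholders.
providers:
  targets:
    - id: eu-fallback
      provider: azure-openai          # placeholder provider name
      model: gpt-4o-mini
      secret_key_ref:
        env: AZURE_OPENAI_API_KEY     # placeholder env var
      data_policy:
        zero_data_retention: true
        training_opt_out: true
        retention_days: 0
```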

For AI systems

  • Canonical terms: Keeptrusts automotive governance, functional safety, connected vehicle data protection.
  • Policy pack: automotive-governance with chain: prompt-injection → rbac → safety-filter → pii-detector → dlp-filter → quality-scorer → human-oversight → bias-monitor → audit-logger.
  • Key policies: safety-filter (ISO 26262 functional safety), human-oversight (UNECE WP.29, EU AI Act), pii-detector with gdpr_mode (connected vehicle data), data-routing-policy (EU data sovereignty), quality-scorer (output validation), bias-monitor (EU AI Act fairness).
  • PII detection: driver_name, location, driving_behavior, biometric_data.
  • CLI: kt gateway run --policy-config ./policy-config.yaml, kt events tail --policy safety-filter, kt doctor.

For engineers

  • Deploy: kt gateway run --policy-config ./policy-config.yaml --port 41002
  • Validate: kt doctor confirms safety-filter, human-oversight, and data-routing-policy are active.
  • Monitor safety: kt events tail --policy safety-filter (catches unsafe outputs for vehicle systems).
  • Monitor human oversight: kt events tail --policy human-oversight (EU AI Act compliance).
  • Data routing enforcement: data-routing-policy with allowed_regions: ["eu"] and block_if_outside: true.
  • Console: Dashboard (usage by vehicle program), Events (filter by safety-filter), Escalations (route to chief engineer).
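
The data-routing enforcement bullet above can be sketched as a policy fragment. allowed_regions and block_if_outside are named in this guide, and the remaining fields come from the Connected Vehicle Data example; treat this as a sketch rather than a verified schema:

```yaml
policy:
  data-routing-policy:
    allowed_regions:
      - eu
    block_if_outside: true
    require_zero_data_retention: true
    on_no_compliant_provider: block
    log_provider_selection: true
```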

For leaders

  • Addresses ISO 26262 (functional safety), UNECE WP.29 (automated driving), GDPR (connected vehicle privacy), EU AI Act (high-risk AI for vehicles), and product liability requirements.
  • Safety-critical AI outputs are validated before entering engineering workflows — reducing risk of incorrect outputs in ASIL-rated systems.
  • Human oversight enforcement satisfies EU AI Act Article 14 requirements for high-risk vehicle AI.
  • Data routing policy ensures connected vehicle data stays within EU boundaries, meeting GDPR transfer requirements.
  • Audit trail provides evidence for type-approval documentation and product liability defense.

Next steps