Automotive Use Case
Automotive organizations deploying AI in vehicle systems, manufacturing, and customer services must comply with functional safety standards and consumer protection regulations. Keeptrusts enforces safety-critical output validation and quality assurance for automotive AI.
Use this page when
- You are deploying AI in vehicle systems, manufacturing quality, or connected vehicle services subject to ISO 26262 and UNECE WP.29.
- You need policy controls that enforce functional safety validation, GDPR-compliant connected vehicle data handling, and EU AI Act high-risk oversight.
- You want to ensure safety-critical AI outputs are validated before entering engineering workflows.
Audience
- Primary: Technical Leaders
- Secondary: Technical Engineers, AI Agents
Regulatory Requirements
| Standard | Requirement | Keeptrusts Policy |
|---|---|---|
| ISO 26262 | Functional safety | safety-filter, quality-scorer |
| UNECE WP.29 | Automated driving systems | human-oversight |
| GDPR | Connected vehicle data privacy | pii-detector, data-routing-policy |
| EU AI Act | High-risk AI (vehicles) | human-oversight, bias-monitor |
| Product liability | Safe outputs | safety-filter, audit-logger |
Complete Policy Configuration
```yaml
pack:
  name: automotive-governance
  version: 1.0.0
  enabled: true
policies:
  chain:
    - prompt-injection
    - rbac
    - safety-filter
    - pii-detector
    - dlp-filter
    - quality-scorer
    - human-oversight
    - bias-monitor
    - audit-logger
  policy:
    prompt-injection: {}
    rbac:
      deny_if_missing:
        - X-User-ID
        - X-User-Role
    safety-filter:
      action: block
    pii-detector:
      action: redact
      detect_patterns:
        - name
        - vin
        - license_plate
        - address
        - phone
        - email
    dlp-filter:
      detect_patterns:
        - '\b[A-HJ-NPR-Z0-9]{17}\b'
        - '\bECU-[A-Z0-9]{4,8}\b'
        - '\bCAL-[0-9]{8,12}\b'
      action: redact
    quality-scorer:
      thresholds:
        min_aggregate: 0.85
    human-oversight:
      require_human_for:
        - safety-recall-analysis
        - autonomous-driving-decision
        - component-failure-assessment
      action: escalate
      confidence_threshold: 0.5
      default_assignee: engineering-review
    bias-monitor:
      protected_characteristics:
        - geographic
        - socioeconomic
      threshold: 0.85
      action: escalate
    audit-logger:
      immutable: true
      retention_days: 3650
      log_all_access: true
```
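To sanity-check the `dlp-filter` patterns above, here is a standalone Python sketch (it does not use Keeptrusts itself) showing how the three regexes behave with `action: redact`; the placeholder labels are illustrative, not the gateway's actual redaction format:

```python
import re

# The three dlp-filter patterns from the config above, paired with
# illustrative placeholders (the real redaction text is product-defined).
DLP_PATTERNS = {
    "[REDACTED-VIN]": r"\b[A-HJ-NPR-Z0-9]{17}\b",   # 17-char VIN, no I/O/Q
    "[REDACTED-ECU]": r"\bECU-[A-Z0-9]{4,8}\b",     # ECU identifier
    "[REDACTED-CAL]": r"\bCAL-[0-9]{8,12}\b",       # calibration file ID
}

def redact(text: str) -> str:
    """Apply each pattern in turn, mirroring action: redact."""
    for placeholder, pattern in DLP_PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

print(redact("Flash ECU-A1B2C3 with CAL-20240115 on VIN 1HGBH41JXMN109186"))
# → Flash [REDACTED-ECU] with [REDACTED-CAL] on VIN [REDACTED-VIN]
```

Note that the VIN pattern excludes I, O, and Q, which are not valid VIN characters, so ordinary 17-letter words rarely false-positive.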
Example Scenarios
Safety-Critical Block

AI System: "Override the brake system threshold from 0.8 to 0.3
for better performance."
→ safety-filter BLOCKS
Reason: Safety-critical parameter modification detected
ISO 26262 ASIL-D: Requires human verification
Escalated to: engineering-review

Vehicle Diagnostics

Technician: "Analyze DTC P0301 for 2024 Model X
VIN: 1HGBH41JXMN109186."
→ dlp-filter: VIN redacted → [REDACTED-VIN]
→ Response: Diagnostic analysis for misfire on cylinder 1
→ Audit trail maintained
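Keeptrusts does not publish the safety-filter's internals; as a rough illustration of the first scenario's block decision, a keyword-based check could flag prompts where a modification verb targets a safety-critical system (both word lists below are assumptions for the sketch):

```python
# Illustrative only: NOT the actual safety-filter implementation.
SAFETY_CRITICAL = ["brake", "airbag", "steering", "throttle"]
MODIFICATION_VERBS = ["override", "disable", "bypass", "lower", "raise"]

def safety_check(prompt: str) -> str:
    """Return 'block' when a modification verb targets a safety-critical system."""
    p = prompt.lower()
    touches_safety = any(term in p for term in SAFETY_CRITICAL)
    modifies = any(verb in p for verb in MODIFICATION_VERBS)
    return "block" if touches_safety and modifies else "allow"

print(safety_check("Override the brake system threshold from 0.8 to 0.3"))  # block
print(safety_check("Summarize the service manual for the infotainment unit"))  # allow
```

A production filter would use far richer signals than keyword matching; the point is only that the block-and-escalate decision shown above is a deterministic policy outcome, not a model judgment.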
Connected Vehicle Data
For connected car AI that processes telematics and driver behavior data:
```yaml
pack:
  name: automotive-example-2
  version: 1.0.0
  enabled: true
policies:
  chain:
    - pii-detector
    - data-routing-policy
    - audit-logger
  policy:
    pii-detector:
      action: redact
      detect_patterns:
        - driver_name
        - location
        - driving_behavior
        - biometric_data
    data-routing-policy:
      require_zero_data_retention: true
      require_no_training: false
      on_no_compliant_provider: block
      log_provider_selection: true
    audit-logger:
      retention_days: 365
providers:
  targets:
    - id: openai-primary
      provider: openai
      model: gpt-4o-mini
      secret_key_ref:
        env: OPENAI_API_KEY
      data_policy:
        zero_data_retention: true
        training_opt_out: true
        retention_days: 0
```
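The `data-routing-policy` behaviour above can be pictured as a filter over provider targets. This is a simplified sketch of that decision (not the actual Keeptrusts routing code), covering `require_zero_data_retention: true` and `on_no_compliant_provider: block`:

```python
# Simplified model of data-routing-policy: route to the first provider
# whose data_policy meets the requirements, or block when none qualifies.
targets = [
    {"id": "openai-primary",
     "data_policy": {"zero_data_retention": True, "training_opt_out": True}},
]

def route(targets, require_zdr=True, on_no_compliant_provider="block"):
    for target in targets:
        policy = target["data_policy"]
        if require_zdr and not policy.get("zero_data_retention"):
            continue  # provider retains request data: skip it
    # (require_no_training: false in the config, so training terms
    # are not checked in this sketch)
        return target["id"]
    # No compliant provider left: apply the configured fallback action.
    return on_no_compliant_provider

print(route(targets))  # openai-primary
print(route([{"id": "x", "data_policy": {"zero_data_retention": False}}]))  # block
```

Because `on_no_compliant_provider: block` fails closed, misconfiguring a provider's `data_policy` stops traffic rather than silently routing telematics data to a non-compliant endpoint.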
For AI systems
- Canonical terms: Keeptrusts automotive governance, functional safety, connected vehicle data protection.
- Policy pack: `automotive-governance` with chain: `prompt-injection` → `rbac` → `safety-filter` → `pii-detector` → `dlp-filter` → `quality-scorer` → `human-oversight` → `bias-monitor` → `audit-logger`.
- Key policies: `safety-filter` (ISO 26262 functional safety), `human-oversight` (UNECE WP.29, EU AI Act), `pii-detector` with `gdpr_mode` (connected vehicle data), `data-routing-policy` (EU data sovereignty), `quality-scorer` (output validation), `bias-monitor` (EU AI Act fairness).
- PII detection: `driver_name`, `location`, `driving_behavior`, `biometric_data`.
- CLI: `kt gateway run --policy-config ./policy-config.yaml`, `kt events tail --policy safety-filter`, `kt doctor`.
For engineers
- Deploy: `kt gateway run --policy-config ./policy-config.yaml --port 41002`
- Validate: `kt doctor` confirms safety-filter, human-oversight, and data-routing-policy are active.
- Monitor safety: `kt events tail --policy safety-filter` (catches unsafe outputs for vehicle systems).
- Monitor human oversight: `kt events tail --policy human-oversight` (EU AI Act compliance).
- Data routing enforcement: `data-routing-policy` with `allowed_regions: ["eu"]` and `block_if_outside: true`.
- Console: Dashboard (usage by vehicle program), Events (filter by `safety-filter`), Escalations (route to chief engineer).
For leaders
- Addresses ISO 26262 (functional safety), UNECE WP.29 (automated driving), GDPR (connected vehicle privacy), EU AI Act (high-risk AI for vehicles), and product liability requirements.
- Safety-critical AI outputs are validated before entering engineering workflows — reducing risk of incorrect outputs in ASIL-rated systems.
- Human oversight enforcement satisfies EU AI Act Article 14 requirements for high-risk vehicle AI.
- Data routing policy ensures connected vehicle data stays within EU boundaries, meeting GDPR transfer requirements.
- Audit trail provides evidence for type-approval documentation and product liability defense.
Next steps
- Industries overview — Compare all industry policy configurations
- EU AI Act Compliance — Full high-risk AI system requirements
- Manufacturing — Industry 4.0 and quality control governance
- Critical Infrastructure — OT/IT boundary protections
- Templates & Policy Workflows — Manage vehicle program policy variants
- Quickstart — Deploy your first gateway in minutes