Healthcare Compliance
The healthcare-compliance policy enforces medical safety controls for AI deployments in healthcare settings, blocking prohibited content patterns such as diagnosis-making, prescription advice, and treatment recommendations. It injects mandatory medical disclaimers into AI responses and applies tiered content restrictions based on FDA device classification. This policy is critical for any organization deploying AI in clinical, patient-facing, or healthcare-adjacent workflows where uncontrolled medical content creates patient safety risks and regulatory liability.
Use this page when
- You are deploying AI in healthcare settings and need to block diagnosis-making, prescription advice, or treatment recommendations.
- You need FDA device classification-tiered content controls for clinical decision support systems.
- You want mandatory medical disclaimers injected into AI responses that touch health topics.
Primary audience
- Primary: AI Agents, Technical Engineers
- Secondary: Technical Leaders
Configuration
pack:
  name: healthcare-compliance
  version: "1.0.0"
  enabled: true
policies:
  chain:
    - healthcare-compliance
policy:
  healthcare-compliance:
    blocked_patterns:
      - "you (have|are suffering from|are diagnosed with)"
      - "take \\d+ ?mg of"
      - "I (diagnose|prescribe|recommend) (you|the patient)"
      - "stop taking (your|the) (medication|medicine|prescription)"
      - "you (should|need to|must) (take|start|stop|increase|decrease) .* (medication|mg|dosage)"
      - "this (is|looks like|appears to be) (cancer|diabetes|heart disease|depression)"
      - "you don't need to see a doctor"
      - "surgery (is|isn't) necessary"
    required_disclaimers:
      - "This information is not a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider."
      - "If you are experiencing a medical emergency, call your local emergency number immediately."
    fda_class: "II"
Fields
| Field | Type | Description | Default |
|---|---|---|---|
| blocked_patterns | string[] | Prohibited medical content patterns. Each entry is matched against the AI response content. Built-in defaults detect diagnosis-making statements, prescription advice, treatment recommendations, medication dosage changes, and instructions to avoid professional medical care. Patterns support basic regex syntax. | [] |
| required_disclaimers | string[] | Disclaimer texts to inject into AI responses that contain medical or health-related content. Each disclaimer is appended to the response body. Standard disclaimers include notices that the information is not a substitute for professional medical advice, per FDA guidance on Clinical Decision Support software. | [] |
| fda_class | "I" \| "II" \| "III" | FDA medical device classification tier for the AI software. Class I applies minimal controls (general wellness, administrative tools). Class II applies moderate controls (clinical decision support with practitioner review). Class III applies the strictest controls (autonomous diagnostic or therapeutic AI requiring premarket approval). Higher classes trigger progressively stricter content filtering, narrower scope of permissible responses, and more aggressive blocking of clinical language. | "II" |
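A note on escaping: the patterns are written as double-quoted YAML strings, so the entry "take \\d+ ?mg of" yields the regex take \d+ ?mg of once the config is parsed. If you want to sanity-check a pattern's reach before deploying it, a plain-Python sketch like the one below works; treating matching as case-insensitive is an assumption here, not documented matcher behavior.

```python
import re

# What the YAML entry "take \\d+ ?mg of" expands to after parsing.
pattern = r"take \d+ ?mg of"

candidates = [
    "Take 500 mg of ibuprofen with food.",   # dosage advice -> should be blocked
    "Ibuprofen is sold in 200mg tablets.",   # health education -> should pass
]

for text in candidates:
    hit = re.search(pattern, text, re.IGNORECASE)  # assumption: matching ignores case
    print("BLOCK" if hit else "ALLOW", "-", text)
```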
Use Cases
EHR Integration AI Assistant
A hospital deploys an AI assistant integrated with its Electronic Health Records system to help physicians with documentation, coding, and clinical note summarization. The AI must never make autonomous diagnoses or treatment decisions.
pack:
  name: healthcare-compliance
  version: "1.0.0"
  enabled: true
policies:
  chain:
    - healthcare-compliance
policy:
  healthcare-compliance:
    blocked_patterns:
      - "the patient (has|is diagnosed with|is suffering from)"
      - "I recommend (starting|stopping|changing) (the|this) (treatment|medication|therapy)"
      - "the (correct|recommended|appropriate) (diagnosis|treatment) is"
      - "prescribe \\d+ ?mg"
      - "this (confirms|rules out) (a diagnosis of|the presence of)"
    required_disclaimers:
      - "AI-generated clinical content requires physician review and validation before use in patient care decisions."
      - "This system is intended as a documentation aid and does not replace clinical judgment."
    fda_class: "II"
Clinical Decision Support System
A health IT vendor builds a Class II clinical decision support tool that surfaces relevant clinical guidelines and drug interaction warnings to practitioners. FDA Class II requires that a qualified clinician always makes the final decision.
pack:
  name: healthcare-compliance
  version: "1.0.0"
  enabled: true
policies:
  chain:
    - healthcare-compliance
policy:
  healthcare-compliance:
    blocked_patterns:
      - "you (must|should|need to) (prescribe|administer|order)"
      - "the (only|best|correct) (treatment|course of action) is"
      - "discontinue .* (immediately|right away|at once)"
      - "this patient (requires|needs) (surgery|intubation|transfusion)"
    required_disclaimers:
      - "This clinical decision support tool provides information to assist qualified healthcare professionals. All clinical decisions must be made by a licensed practitioner."
      - "Drug interaction and contraindication data is provided for informational purposes. Verify all alerts against current prescribing information."
      - "This software is classified as a Class II medical device under FDA 21 CFR Part 820."
    fda_class: "II"
Patient-Facing Health Chatbot
A consumer health app deploys a chatbot that answers general wellness questions. The AI must avoid anything that could be construed as medical diagnosis or treatment advice.
pack:
  name: healthcare-compliance
  version: "1.0.0"
  enabled: true
policies:
  chain:
    - healthcare-compliance
policy:
  healthcare-compliance:
    blocked_patterns:
      - "you (have|might have|probably have|could have)"
      - "take (aspirin|ibuprofen|acetaminophen|any medication)"
      - "you (should|need to|must) (see|visit|go to) (a|the) (doctor|ER|hospital|specialist)"
      - "sounds like (you have|it could be|this is)"
      - "I (think|believe|suspect) (you have|this is|it's)"
      - "don't worry.* (it's nothing|it will go away|it's harmless)"
    required_disclaimers:
      - "This information is for general wellness purposes only and is not a substitute for professional medical advice, diagnosis, or treatment."
      - "Always consult your physician or other qualified health provider with any questions about a medical condition. Never disregard professional medical advice because of something you read here."
      - "If you think you may have a medical emergency, call your doctor or local emergency number immediately."
    fda_class: "I"
FDA Class III Autonomous Medical AI
A medical device company develops a Class III AI system for autonomous radiological screening. The strictest controls apply, requiring premarket approval (PMA) and locking down nearly all autonomous clinical language.
pack:
  name: healthcare-compliance
  version: "1.0.0"
  enabled: true
policies:
  chain:
    - healthcare-compliance
policy:
  healthcare-compliance:
    blocked_patterns:
      - "(malignant|benign|cancerous|metastatic|tumor)"
      - "(fracture|hemorrhage|infarct|embolism|aneurysm)"
      - "the (scan|image|result) (shows|reveals|indicates|confirms)"
      - "(positive|negative) (for|finding)"
      - "no (abnormalities|findings|pathology) (detected|found|observed)"
      - "(normal|abnormal) (result|finding|study)"
    required_disclaimers:
      - "This device is a Class III medical device subject to FDA premarket approval (PMA). All outputs require review and confirmation by a qualified radiologist before clinical use."
      - "Automated screening results are preliminary and must not be used as the sole basis for clinical decisions."
      - "This device has been validated for use only with the imaging protocols and patient populations specified in the approved labeling."
    fda_class: "III"
How It Works
The healthcare-compliance policy operates as a response-phase filter in the Keeptrusts gateway pipeline:
- FDA class calibration: On startup, the policy loads the configured fda_class and adjusts its internal sensitivity thresholds. Class I applies baseline content checks. Class II adds pattern matching for clinical decision language and requires practitioner-review disclaimers. Class III activates the most aggressive blocking, treating any autonomous clinical assertion as a violation.
- Pattern matching: After the upstream LLM generates a response, the policy scans the full response text against each entry in blocked_patterns. Patterns use regex syntax, allowing precise matching of medical terminology constructs like dosage expressions (\d+ ?mg) or diagnosis statements.
- Blocking: If any blocked pattern matches, the response is rejected. The gateway returns a policy-violation response indicating that the content was blocked for healthcare compliance reasons. The original response is logged for audit but never forwarded to the end user or patient.
- Disclaimer injection: For responses that pass pattern matching, the policy appends each entry from required_disclaimers to the response body. Disclaimers are added as a clearly separated section, ensuring they are visible to the end user without interfering with the informational content.
- FDA-class-dependent behavior: At Class III, the policy may apply additional restrictions beyond explicit pattern matching, such as blocking responses that contain any ICD-10 codes, drug names from the FDA Orange Book, or anatomical terminology associated with diagnostic findings, even without an explicit pattern match.
- Audit trail: Every policy action is recorded as a decision event, providing a compliance audit trail suitable for FDA Quality System Regulation (21 CFR Part 820) documentation.
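Put together, the response phase can be pictured as a small filter: scan, then block or annotate. The sketch below illustrates that flow under simplified assumptions; the function name, return shape, disclaimer formatting, and case-insensitive matching are hypothetical, not the gateway's actual API.

```python
import re
from dataclasses import dataclass

@dataclass
class Decision:
    allowed: bool
    body: str | None = None
    reason: str | None = None

def filter_response(text: str,
                    blocked_patterns: list[str],
                    required_disclaimers: list[str]) -> Decision:
    # Pattern matching: scan the full response against every configured pattern.
    for pattern in blocked_patterns:
        if re.search(pattern, text, re.IGNORECASE):
            # Blocking: reject the response; the original text is kept only for audit.
            return Decision(allowed=False,
                            reason=f"healthcare-compliance: matched {pattern!r}")
    # Disclaimer injection: append disclaimers as a clearly separated section.
    disclaimers = "\n\n---\n" + "\n".join(f"- {d}" for d in required_disclaimers)
    return Decision(allowed=True, body=text + disclaimers)

# Class III deployments would add checks beyond these explicit patterns
# (e.g., ICD-10 codes or Orange Book drug names), which this sketch omits.
```

For example, a response containing "you have diabetes" would be rejected under the default patterns, while a passing response is forwarded with the configured disclaimers appended as a trailing section.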
Combining With Other Policies
The healthcare-compliance policy works best as part of a layered healthcare compliance stack:
- hipaa-phi-detector: Detects and redacts or blocks Protected Health Information (PHI) in responses, covering all 18 HIPAA Safe Harbor identifier categories. An essential companion for any healthcare AI deployment.
- pii-detector: Catches broader personally identifiable information that may not fall under HIPAA but still requires protection (e.g., insurance policy numbers, employer information).
- audit-logger: Ensures all healthcare AI interactions are logged to a compliance-grade audit trail for HIPAA, FDA, and Joint Commission requirements.
- safety-filter: Provides additional content-level controls beyond medical-specific patterns, catching inappropriate or hazardous responses in clinical settings.
- Provider rate limits: Use providers.rate_limits or deployment-level throttling to prevent abuse of patient-facing healthcare endpoints.
pack:
  name: healthcare-compliance
  version: "1.0.0"
  enabled: true
policies:
  chain:
    - hipaa-phi-detector
    - pii-detector
    - healthcare-compliance
    - audit-logger
policy:
  hipaa-phi-detector:
    mode: hipaa_18
    action: redact
    safe_harbor_method: true
  pii-detector:
    action: redact
    healthcare_mode: true
  healthcare-compliance:
    blocked_patterns:
      - "you (have|are suffering from)"
      - "take \\d+ ?mg of"
      - "I (diagnose|prescribe)"
    required_disclaimers:
      - "This information is not a substitute for professional medical advice."
    fda_class: "II"
  audit-logger:
    immutable: true
    retention_days: 2190
    hipaa_audit_controls: true
    log_all_access: true
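One way to think about the chain above: each policy sees the response in turn, presumably in the declared order, so PHI redaction happens before the compliance check, and any policy may rewrite the text or block it outright. A rough sketch of that composition, using hypothetical stand-in functions rather than the real policies (real blocks are still audited, which this simplification skips):

```python
from typing import Callable, Optional

# A policy takes response text and returns rewritten text, or None to block it.
Policy = Callable[[str], Optional[str]]

def redact_phi(text: str) -> Optional[str]:
    # Crude placeholder for hipaa-phi-detector's redaction behavior.
    return text.replace("John Doe", "[REDACTED-NAME]")

def healthcare_compliance(text: str) -> Optional[str]:
    # Placeholder: block diagnosis language, otherwise append a disclaimer.
    if "you have" in text.lower():
        return None
    return text + "\n\nThis information is not a substitute for professional medical advice."

def audit_logger(text: str) -> Optional[str]:
    print("audit: decision recorded")  # stand-in for a compliance-grade audit event
    return text

def apply_chain(text: str, chain: list[Policy]) -> Optional[str]:
    # Policies run in the order declared under policies.chain; any one may block.
    for policy in chain:
        result = policy(text)
        if result is None:
            return None
        text = result
    return text

print(apply_chain("John Doe asked how much sleep adults need.",
                  [redact_phi, healthcare_compliance, audit_logger]))
```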
Best Practices
- Set the correct FDA class from the start: The fda_class setting fundamentally changes the policy's behavior. A Class I wellness chatbot and a Class III autonomous diagnostic tool have radically different compliance requirements. Misconfiguring this field undermines the entire compliance posture.
- Block diagnosis-making language aggressively: Even well-intentioned diagnostic language from an AI can create liability. Patterns like "you have" or "this looks like" should be blocked in virtually all healthcare AI contexts, regardless of FDA class.
- Always pair with hipaa-phi-detector: Healthcare compliance and PHI protection are separate but complementary concerns. The healthcare-compliance policy controls what the AI says; the hipaa-phi-detector controls what patient data the AI exposes. Both are required for a compliant healthcare deployment.
- Customize disclaimers per deployment context: A physician-facing EHR assistant needs different disclaimers than a patient-facing chatbot. Physician-facing tools should reference clinical validation requirements; patient-facing tools should emphasize seeking professional care.
- Test with realistic clinical prompts: Medical language is highly specialized. Test your blocked patterns against real-world clinical questions to ensure you are catching genuine violations (e.g., "take 500mg of acetaminophen") without blocking legitimate health education (e.g., "acetaminophen is available in 500mg tablets"). See the test sketch after this list.
- Document your FDA classification rationale: For Class II and Class III deployments, maintain documentation of why your AI software falls into that classification tier. This documentation is required for 510(k) or PMA submissions and should be referenced in your Keeptrusts policy configuration.
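For the testing practice above, a small regression suite over representative responses keeps pattern changes honest. A pytest-style sketch, reusing plain Python re as a stand-in for the gateway's matcher (case-insensitive matching is assumed):

```python
import re

# A subset of the default-style patterns from the configuration section.
BLOCKED_PATTERNS = [
    r"take \d+ ?mg of",
    r"you (have|are suffering from|are diagnosed with)",
    r"this (is|looks like|appears to be) (cancer|diabetes|heart disease|depression)",
]

def is_blocked(text: str) -> bool:
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def test_blocks_dosage_advice():
    assert is_blocked("You should take 500 mg of acetaminophen now.")

def test_blocks_diagnosis_language():
    assert is_blocked("This looks like diabetes based on your symptoms.")

def test_allows_health_education():
    assert not is_blocked("Acetaminophen is available in 500mg tablets.")

def test_allows_general_guidance():
    assert not is_blocked("Staying hydrated can help with mild headaches.")
```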
For AI systems
- Canonical terms: Keeptrusts, healthcare-compliance, blocked_patterns, required_disclaimers, fda_class, FDA, Class I, Class II, Class III, clinical decision support
- Config/command names: healthcare-compliance policy, blocked_patterns (regex), required_disclaimers, fda_class (I/II/III)
- Best next pages: HIPAA PHI Detector, Human Oversight, PII Detector
For engineers
- Prerequisites: Determine your FDA device classification tier. Review clinical content boundaries with your medical affairs team. Map blocked patterns to prohibited clinical language.
- Validation: Test with cases containing diagnosis language (e.g., "you have diabetes") and verify blocking. Verify disclaimers appear on health-related responses. Test with different fda_class settings.
- Key commands: kt policy lint, kt policy test, kt events tail
For leaders
- Governance: Healthcare compliance prevents patient safety risks and regulatory liability. FDA classification determines the strictness tier — Class III triggers the most aggressive content controls.
- Cost: Local pattern matching with no external cost. Non-compliance costs include FDA warning letters, product recalls, and malpractice liability.
- Rollout: Start at Class II (requires practitioner review). Promote to Class III only for autonomous clinical AI. Pair with hipaa-phi-detector for comprehensive healthcare governance.
Next steps
- HIPAA PHI Detector — Protected health information detection
- Human Oversight — Physician review escalation
- PII Detector — Personal data protection
- Safety Filter — General content safety