EU AI Act Compliance

The EU AI Act is Europe’s landmark AI regulation, taking full effect on August 2, 2026. It requires organizations using high-risk AI to conduct risk assessments, maintain audit trails, provide explanations to users, and notify authorities of serious incidents within 72 hours. TruthVouch automates compliance with 37 articles of the Act.

What Is the EU AI Act?

The EU AI Act categorizes AI systems by risk level and assigns requirements accordingly:

| Risk Level | Definition | Examples | Requirements |
| --- | --- | --- | --- |
| Unacceptable | AI poses unacceptable risk to safety or rights | Social scoring, subliminal manipulation | Prohibited — cannot be deployed |
| High-Risk | Could significantly harm safety, rights, or equal opportunities | Hiring tools, loan approval, facial recognition | Risk assessment, documentation, audit trail, explanation, human oversight |
| Limited-Risk | Users may not realize they are interacting with AI | Chatbots, content recommendation | Transparency disclosures |
| Minimal-Risk | No meaningful risk | Spam detection, spell checker | No requirements |

TruthVouch EU AI Act Coverage

Compliance AI covers 37 articles of the EU AI Act:

Article Categories

Prohibited Practices (Article 5):

  • Subliminal manipulation
  • Exploiting vulnerabilities
  • Social scoring for public administration
  • Real-time biometric identification in public spaces (with narrow exceptions)

High-Risk Requirements (Articles 6-51):

  • Risk classification
  • Risk assessment documentation (Annex III)
  • Data governance
  • Technical documentation (Annex IV)
  • Bias and fairness testing
  • Human oversight mechanisms
  • Audit trails
  • Post-deployment monitoring
  • Incident reporting (Article 73)

Transparency Requirements (Article 52):

  • Notification when interacting with AI systems
  • Disclosure of AI-generated content (deep fakes, synthetic media)

Post-Market Surveillance (Article 72):

  • Ongoing monitoring of system performance
  • Reporting serious incidents to authorities

Article 73 Incident Reporting:

  • Notify authority within 72 hours of serious incidents
  • Documented incident response process

Risk Classification

The first step in EU AI Act compliance is classifying your AI systems. Compliance AI auto-classifies based on four risk factors:

1. System Type

  • Generative AI: Higher risk (potentially high-risk)
  • Recommendation engines: Medium-high risk
  • Biometric systems: High risk
  • Decision-making systems: High risk
  • Utility tools (spam detection, autocomplete): Minimal risk

2. Decision Scope

  • Autonomous decision: High-risk (no human involvement)
  • Assisted decision: Medium risk (human reviews result)
  • Informational only: Lower risk (humans make decision)

3. Affected Population

  • Children or vulnerable groups: Higher risk
  • General public: Medium risk
  • Internal use only: Lower risk

4. Data Sensitivity

  • Sensitive categories (health, biometric, racial, financial): High-risk
  • Limited personal data: Medium risk
  • Non-personal or pseudonymized: Lower risk

Auto-Classification Logic

Unacceptable Risk:

  • Social scoring system for public services
  • Real-time biometric identification in public spaces (except law enforcement exceptions)
  • Subliminal/manipulation systems

High-Risk:

  • Autonomous hiring or promotion decisions
  • Autonomous financial decisions (loans, insurance, benefits)
  • Biometric systems (facial recognition, fingerprint, iris)
  • Content moderation at scale
  • Educational system performance evaluation
  • Emotion recognition systems in safety-critical contexts

Limited-Risk:

  • Chatbots and content generation
  • Recommendation engines
  • Spam detection
  • Content moderation (human-in-the-loop)

Minimal-Risk:

  • Spell checker
  • Syntax highlighting
  • Non-decision informational tools
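The four-factor classification above can be sketched as a "highest level wins" rule. This is an illustrative sketch only: the category names, sets, and precedence rules are assumptions for the example, not TruthVouch's actual classification logic.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Illustrative category sets, loosely based on the examples above.
PROHIBITED = {"social_scoring", "public_realtime_biometric_id", "subliminal"}
HIGH_RISK_TYPES = {"biometric", "hiring", "lending", "education_evaluation"}
LIMITED_TYPES = {"chatbot", "recommendation", "content_generation"}

def classify(system_type: str, decision_scope: str,
             population: str, data_sensitivity: str) -> RiskLevel:
    """Return the highest risk level triggered by any of the four factors."""
    if system_type in PROHIBITED:
        return RiskLevel.UNACCEPTABLE
    level = RiskLevel.MINIMAL
    if system_type in HIGH_RISK_TYPES:
        level = max(level, RiskLevel.HIGH)
    if system_type in LIMITED_TYPES:
        level = max(level, RiskLevel.LIMITED)
    if decision_scope == "autonomous":       # no human involvement
        level = max(level, RiskLevel.HIGH)
    if population in {"children", "vulnerable"}:
        level = max(level, RiskLevel.HIGH)
    if data_sensitivity == "sensitive":      # health, biometric, racial, financial
        level = max(level, RiskLevel.HIGH)
    return level
```

The key design point is that any single high-risk factor is sufficient to push the system into the high-risk bucket; factors never lower the level.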

Compliance Roadmap by Risk Level

High-Risk Systems: Full Compliance Checklist

  1. Risk Assessment (Annex III)

    • AI system description
    • Intended purpose and foreseeable misuse
    • Identification of risks to health, safety, rights
    • Evaluation of likelihood and severity
    • Existing and proposed safeguards
    • TruthVouch: Generates via DPIA & Algorithmic Assessment
  2. Technical Documentation (Annex IV)

    • Training dataset description
    • Performance metrics (accuracy, precision, recall, F1)
    • Bias and fairness testing results
    • Robustness and adversarial testing
    • Explainability methodology
    • Model card
    • TruthVouch: Auto-generates from system profile
  3. Data Governance

    • Data quality assurance procedures
    • Bias detection and mitigation
    • Training data documentation
    • TruthVouch: Linked to your data connectors
  4. Audit Trail

    • Immutable log of system outputs and decisions
    • User interactions
    • System modifications
    • Incident reports
    • TruthVouch: Auto-collected via infrastructure connectors
  5. Human Oversight

    • Clear responsibility for human review
    • Tools for humans to understand decisions
    • Process for humans to override/reject automated decisions
    • TruthVouch: Policy definition and enforcement
  6. Post-Market Monitoring

    • Plan for continuous monitoring
    • Procedures to detect performance degradation
    • Trigger points for retraining or redeployment
    • TruthVouch: Tracks via infrastructure connectors and incident reporting
  7. Incident Reporting (Article 73)

    • 72-hour notification to authority
    • Documented serious incident process
    • TruthVouch: Pre-built playbook + authority notification dispatch

Limited-Risk Systems: Transparency Requirements

Required for limited-risk systems (AI systems that are neither high-risk nor minimal-risk):

  1. User Notification

    • Clearly disclose that AI system generated content
    • Example: “This response was generated by AI”
  2. AI-Generated Content Labeling

    • Mark deep fakes, synthetic media, AI-generated text
    • Provide explanation of how content was created
  3. Compliance AI Support:

    • Notifications: Template library for user disclosures
    • Content Labeling: Integration with C2PA (Coalition for Content Provenance and Authenticity) for tamper-proof AI content labels
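The user-notification step above amounts to wrapping AI output with a disclosure before it reaches the user. A minimal sketch, assuming plain-text output; the wording and function name are illustrative, not a TruthVouch API:

```python
# Disclosure text matching the example wording above.
AI_DISCLOSURE = "This response was generated by AI."

def with_disclosure(ai_output: str) -> str:
    """Prefix AI-generated text with the required transparency notice."""
    return f"[{AI_DISCLOSURE}]\n{ai_output}"
```

In practice the disclosure would be rendered in the product UI rather than concatenated into the text, but the compliance requirement is the same: the notice must accompany every AI-generated response.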

Minimal-Risk Systems: No Requirements

No compliance burden. These include:

  • Spell checkers
  • Syntax highlighting
  • Email spam filters
  • Standard search result ranking (unless personalized)

Key Features: Article 73 Incident Reporting

What Is a “Serious Incident”?

An incident that has or could have:

  • Caused death or serious injury
  • Caused significant harm to health, environment, or critical infrastructure
  • Violated fundamental rights or freedoms
  • Resulted in significant economic loss

Reporting Process

  1. Incident occurs → Auto-trigger incident management workflow
  2. Assess severity → Is it serious per Article 73?
  3. If serious:
    • Document incident in timeline
    • Assign to incident response team
    • Start 72-hour clock
  4. 72-hour deadline:
    • TruthVouch alerts when deadline approaches
    • Auto-draft notification to EU authority (DPA, national regulator)
    • Interim report if full investigation incomplete
  5. Follow-up reporting:
    • Detailed findings within 15 days
    • Root cause analysis
    • Corrective actions
  6. Close incident → Archive documentation
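The two deadlines in the process above (72 hours for authority notification, 15 days for detailed findings) can be sketched as simple clock arithmetic. A minimal sketch assuming UTC timestamps; the function and field names are illustrative:

```python
from datetime import datetime, timedelta, timezone

# Windows stated in the Article 73 reporting process above.
NOTIFICATION_WINDOW = timedelta(hours=72)
FOLLOW_UP_WINDOW = timedelta(days=15)

def deadlines(detected_at: datetime) -> dict:
    """Compute both reporting deadlines for a serious incident."""
    return {
        "authority_notification": detected_at + NOTIFICATION_WINDOW,
        "detailed_findings": detected_at + FOLLOW_UP_WINDOW,
    }

def hours_remaining(detected_at: datetime, now: datetime) -> float:
    """Hours left on the 72-hour clock; negative means the deadline passed."""
    return (detected_at + NOTIFICATION_WINDOW - now).total_seconds() / 3600
```

A deadline-alert feature like the one described above would simply poll `hours_remaining` and escalate as it approaches zero.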

TruthVouch Features:

  • Article 73 Playbook — Pre-filled template for incident type
  • Authority Notification Dispatch — Auto-generates and tracks notifications
  • 72-hour Deadline Alert — Automated reminders
  • Timeline View — Chronological incident record
  • Evidence Attachment — Link investigation records, logs, fixes

Annex IV Technical Documentation

High-risk systems must maintain Annex IV documentation:

Contents

| Section | What to Document | Examples |
| --- | --- | --- |
| System Description | What the system does, version, intended use | "Customer service chatbot v2.1, deployed to website" |
| Training Data | Dataset size, composition, quality checks | "1M conversation pairs, 98% accuracy on test set" |
| Performance Metrics | Accuracy, precision, recall, F1, ROC-AUC | "Accuracy: 92%, Precision: 89%, Recall: 94%" |
| Bias & Fairness | Disparate impact analysis by demographics | "Gender gap: 1.2%, Age gap: 0.8%" |
| Explainability | How decisions are explained to users | "SHAP values, feature importance" |
| Robustness | Adversarial testing, edge cases | "Tested on 50K adversarial examples" |
| Deployment & Monitoring | Where system runs, monitoring plan | "AWS, daily performance checks" |
| Modifications | Version history, changes | "Retrained monthly, last update 2026-02-15" |

TruthVouch: Auto-generates PDF Annex IV from your system profile and model card data.
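Conceptually, assembling an Annex IV document is collecting the eight sections in the table above from a system profile and flagging gaps. A hedged sketch under that assumption; the section keys and function name are illustrative, not TruthVouch's schema:

```python
# Section keys mirror the Annex IV table above.
ANNEX_IV_SECTIONS = [
    "system_description", "training_data", "performance_metrics",
    "bias_and_fairness", "explainability", "robustness",
    "deployment_and_monitoring", "modifications",
]

def build_annex_iv(profile: dict) -> dict:
    """Collect the Annex IV sections from a system profile, flagging missing ones."""
    doc, missing = {}, []
    for section in ANNEX_IV_SECTIONS:
        value = profile.get(section)
        if value is None:
            missing.append(section)
        else:
            doc[section] = value
    doc["complete"] = not missing
    doc["missing_sections"] = missing
    return doc
```

The useful property for compliance work is the explicit `missing_sections` list: an incomplete profile yields a concrete to-do list rather than a silently thin document.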

Biased Against Bias: Testing & Monitoring

EU AI Act Annex III requires fairness assessment. Compliance AI auto-runs tests:

Tests Included

  1. Demographic Parity — Does system treat groups equally on average?
  2. Equalized Odds — Do error rates match across groups?
  3. Disparate Impact — Is the 80% (four-fifths) rule violated?
  4. Predictive Parity — Are predictions equally accurate across groups?

Example: Hiring Tool

Test if recommendation rate differs by gender:

  • Male applicants: 45% recommended
  • Female applicants: 42% recommended
  • Disparate impact: 93.3% (80% rule is met)
  • Conclusion: No significant gender bias detected
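The four-fifths check in the example above is a one-line ratio; a minimal sketch with the function name as an illustrative assumption:

```python
def disparate_impact(protected_rate: float, reference_rate: float) -> float:
    """Ratio of selection rates; a result >= 0.8 passes the four-fifths rule."""
    return protected_rate / reference_rate

# Numbers from the hiring-tool example: 42% female vs. 45% male recommendation rate.
ratio = disparate_impact(0.42, 0.45)   # ≈ 0.933
passes = ratio >= 0.8
```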

Monitoring: Compliance AI watches for bias drift post-deployment and alerts if test results change.

Conformity Assessment Routes

For high-risk systems, you have two conformity assessment options:

Route 1: Internal Assessment

  1. Conduct risk assessment (Annex III)
  2. Create technical documentation (Annex IV)
  3. Test for bias and fairness
  4. Document audit trail
  5. Implement human oversight
  6. Set up post-deployment monitoring
  7. File declaration of conformity

TruthVouch Support: Auto-generates assessments, documentation, testing results

Route 2: Third-Party Audit

  1. Same as Route 1, plus
  2. Hire notified body (accredited auditor)
  3. Auditor reviews documentation
  4. Issues conformity certificate
  5. File with declaration of conformity

TruthVouch Support: Generates audit-ready documentation; integrates with auditor tools (OSCAL export, evidence mapping)

Prohibited AI Systems

The EU AI Act bans certain AI systems entirely. Compliance AI flags if your system falls into prohibited categories:

Prohibited Categories (Article 5)

| Prohibition | Why | Example |
| --- | --- | --- |
| Subliminal Manipulation | Bypasses conscious decision-making | Hidden persuasion, microtargeted ads |
| Exploitation of Vulnerabilities | Targets individuals by age, disability, mental illness | Predatory targeting of children or elderly |
| Social Scoring | Public sector automated social evaluation | Denying services based on AI-computed “trustworthiness” |
| Real-Time Biometric ID in Public | Privacy violation | Facial recognition in public squares (limited law enforcement exceptions exist) |

TruthVouch Action: If your system matches a prohibited category, you’ll see a STOP alert. The system cannot be deployed in the EU until redesigned.

Transition: August 2026 Launch

The EU AI Act takes effect August 2, 2026. Rules phase in by risk level:

| Phase | Date | What Applies |
| --- | --- | --- |
| Prohibited Practices | Aug 2, 2025 (NOW!) | Social scoring, subliminal manipulation banned |
| Governance Requirements | Aug 2, 2026 | High-risk systems must meet all requirements |
| Transparency Requirements | Aug 2, 2026 | Limited-risk systems must disclose |
| Notified Bodies | Aug 2, 2026 | Conformity assessment bodies available |

Compliance Deadline: If you have high-risk systems in the EU, you must be compliant by August 2026. Start now.

Next Steps