Comply with EU AI Act

The EU AI Act is now enforceable. If you sell to EU customers or employ people in the EU, you must be able to demonstrate compliance. This guide shows you how to map your AI systems to the EU AI Act, generate the mandatory documentation, and produce evidence of compliance in under an hour.

Overview

The EU AI Act defines requirements for AI systems based on risk level:

  • Prohibited AI (Article 5): Systems that violate fundamental rights (real-time biometric mass surveillance, emotion recognition in law enforcement, social scoring)
  • High-Risk AI (Articles 6-49): Systems that significantly impact people’s rights (hiring, benefits assessment, criminal risk evaluation, credit decisions). These require mandatory risk assessments, data documentation, transparency, and human oversight
  • General Purpose AI (Articles 51-56): Large language models such as ChatGPT and Claude that can be used for many purposes

Key compliance requirements:

  • Article 35 (GDPR): Data Protection Impact Assessment (DPIA) for High-Risk systems
  • Article 11 and Annex IV: Mandatory technical documentation for High-Risk systems
  • Article 73: Incident notification when a High-Risk system causes substantial harm
  • Article 33 (GDPR): Breach notification to authorities within 72 hours of a personal data breach

TruthVouch automates all of this:

  1. Auto-discovers your AI systems
  2. Classifies them by risk level
  3. Maps to applicable EU AI Act articles
  4. Generates DPIAs and technical documentation
  5. Tracks Article 73 incidents
  6. Handles authority notifications

Prerequisites

  1. TruthVouch account with Professional tier or higher (Compliance AI included)
  2. EU AI Act framework enabled (automatic if you select EU region during signup)
  3. Basic info about your AI systems (what they do, who uses them)

If you haven’t set these up, start with For Compliance Officers.

Step 1: Auto-Discover AI Systems

TruthVouch finds all AI systems you’re using, including shadow AI you might not know about.

  1. Go to Compliance → AI Systems → Discovery

  2. TruthVouch connects to your cloud providers and ITSM systems:

    • AWS — SageMaker models, Bedrock models, Lambda functions using ML
    • Azure — Cognitive Services, Machine Learning models, OpenAI deployments
    • Google Cloud — Vertex AI models, BigQuery ML, Document AI
    • GitHub — Copilot usage, Actions workflows
    • Datadog — ML-based anomaly detection and forecasting
    • ServiceNow — AI workflows and automation
    • Jira — AI-assisted issue classification
  3. Review the discovered systems:

    • Mark ones you actually own/govern (vs. SaaS provider tools)
    • Add any self-hosted models not auto-detected
    • Add any third-party AI vendors (consultant AI, vendor AI solutions)

You’ll typically discover 3-5× more AI systems than your team can initially list.
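The review in step 3 amounts to deduplicating and tagging what the connectors return. A minimal sketch in Python, using hypothetical provider payloads (the field names are illustrative, not TruthVouch’s actual schema):

```python
# Normalizing AI systems discovered by several connectors into one inventory.
# The sample payloads below are hypothetical; a real discovery job would pull
# them from each provider's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class AISystem:
    name: str
    provider: str
    kind: str  # e.g. "model", "deployment", "workflow"

def merge_inventories(*inventories):
    """Deduplicate systems reported by more than one connector."""
    seen, merged = set(), []
    for inventory in inventories:
        for system in inventory:
            key = (system.name.lower(), system.provider)
            if key not in seen:
                seen.add(key)
                merged.append(system)
    return merged

aws = [AISystem("resume-screener-v2", "aws", "model")]
azure = [AISystem("support-chat", "azure", "deployment"),
         AISystem("resume-screener-v2", "aws", "model")]  # duplicate finding
inventory = merge_inventories(aws, azure)
print([s.name for s in inventory])  # ['resume-screener-v2', 'support-chat']
```

The same system often surfaces through multiple connectors (e.g. a SageMaker model also visible via Datadog monitors), which is why deduplication comes before ownership review.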

Step 2: Classify by Risk Level

For each discovered system, determine its risk category under the EU AI Act.

  1. Go to Compliance → EU AI Act → Risk Classification

  2. For each system, answer:

    • Does it process biometric data? (Yes = potential Prohibited or High-Risk)
    • Does it impact employment decisions? (Hiring, promotion, termination = High-Risk)
    • Does it impact financial services? (Credit, insurance, benefits = High-Risk)
    • Does it impact legal proceedings? (Criminal risk, parole = High-Risk)
    • Does it impact public services? (Welfare, essential services = High-Risk)
    • Is it a large language model? (GPT, Claude, Gemini = General Purpose)
  3. TruthVouch auto-classifies systems:

    • Prohibited: Block immediately
    • High-Risk: Requires mandatory documentation and oversight
    • General Purpose: Requires transparency and GDPR compliance
    • Minimal Risk: Basic documentation sufficient

Important: If you find any Prohibited systems (real-time biometric surveillance, emotion recognition in law enforcement), you must stop using them immediately.
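The questionnaire above maps onto a rule-based decision evaluated in order of severity. A sketch in Python; the answer keys are illustrative, not TruthVouch’s actual schema:

```python
# Rule-based risk classification following the Step 2 questions. Prohibited
# checks run first because they override everything else.
HIGH_RISK_DOMAINS = {"employment", "financial", "legal", "public_services"}

def classify(answers: dict) -> str:
    """Map questionnaire answers to an EU AI Act risk category."""
    if answers.get("realtime_biometric_surveillance"):
        return "Prohibited"
    if answers.get("biometric") or answers.get("domain") in HIGH_RISK_DOMAINS:
        return "High-Risk"
    if answers.get("general_purpose_llm"):
        return "General Purpose"
    return "Minimal Risk"

print(classify({"domain": "employment"}))                   # High-Risk
print(classify({"general_purpose_llm": True}))              # General Purpose
print(classify({"realtime_biometric_surveillance": True}))  # Prohibited
print(classify({}))                                         # Minimal Risk
```

Ordering matters: a system that is both biometric and an LLM should surface as Prohibited or High-Risk, never fall through to General Purpose.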

Step 3: Create DPIAs (Data Protection Impact Assessments)

For High-Risk systems, you must complete a Data Protection Impact Assessment (Article 35, GDPR; the EU AI Act adds a related fundamental rights impact assessment in Article 27).

  1. Go to Compliance → EU AI Act → DPIAs

  2. Click New DPIA

  3. Select the High-Risk system

  4. TruthVouch auto-generates a DPIA template with your system’s details

  5. Complete the DPIA sections:

    • System description: What it does, who operates it, who it affects
    • Data processing: What data does it use? Where does it come from?
    • Necessity and proportionality: Is using AI necessary? Could you use non-AI alternatives?
    • Risk assessment: What could go wrong? (Discrimination, incorrect decisions, data breaches)
    • Mitigation measures: How will you prevent or minimize risks?
    • DPO sign-off: Your Data Protection Officer reviews and approves
  6. Export as PDF for your compliance file

Time to complete: 1-2 hours per High-Risk system (TruthVouch provides templates and guidance).
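If you track DPIA drafts outside TruthVouch, a completeness gate before DPO sign-off can be as simple as the following sketch (section names mirror the list above; the structure is illustrative):

```python
# Checking that every DPIA section has content before routing to the DPO.
DPIA_SECTIONS = (
    "system_description",
    "data_processing",
    "necessity_proportionality",
    "risk_assessment",
    "mitigation_measures",
)

def ready_for_signoff(dpia: dict) -> bool:
    """True only when every section has non-empty content."""
    return all(dpia.get(section, "").strip() for section in DPIA_SECTIONS)

draft = {"system_description": "Resume screening AI",
         "data_processing": "Candidate CVs and application forms"}
print(ready_for_signoff(draft))  # False: three sections still empty
```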

Step 4: Generate Annex IV Documentation

For High-Risk systems, the EU AI Act requires mandatory technical documentation (Annex IV).

  1. Go to Compliance → EU AI Act → Technical Documentation

  2. Click Generate Annex IV

  3. Select your High-Risk system

  4. TruthVouch auto-generates documentation covering:

    • System overview: Name, purpose, operator, developer
    • Version history: All versions and changes
    • Data documentation: Training data, validation data, test data sources and characteristics
    • Model card: Architecture, performance metrics, limitations, uncertainty measures
    • Risk management: Identified risks, mitigation strategies, residual risks
    • Testing and validation: Test procedures, test results, continuous monitoring
    • Human oversight: How humans are kept in the loop
    • Instructions for use: How to correctly use the system, known limitations
    • Safety measures: How to handle errors or harmful outputs
  5. Review each section (much is auto-generated, some needs your input)

  6. Export as PDF

Regulators can request this documentation at any time, and EU customers increasingly require it as part of procurement and vendor due diligence.
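To see which Annex IV sections still need human input, the auto-generated document can be modeled as a skeleton with placeholders. A sketch, with headings taken from the list above (the record format is illustrative):

```python
# Building an Annex IV-style skeleton and counting sections that still need
# input. Headings follow Step 4; the record format is illustrative.
SECTIONS = [
    "System overview", "Version history", "Data documentation", "Model card",
    "Risk management", "Testing and validation", "Human oversight",
    "Instructions for use", "Safety measures",
]

def skeleton(system_name: str, filled: dict) -> str:
    lines = [f"Technical Documentation: {system_name}"]
    for section in SECTIONS:
        lines.append(f"\n## {section}\n{filled.get(section, '[NEEDS INPUT]')}")
    return "\n".join(lines)

doc = skeleton("resume-screener-v2",
               {"System overview": "Screens resumes for open roles."})
print(doc.count("[NEEDS INPUT]"))  # 8 of 9 sections still need input
```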

Step 5: Set Up Article 73 Incident Tracking

Article 73 requires you to notify authorities within 30 days if a High-Risk AI system causes substantial harm (physical injury, death, discrimination, breach of fundamental rights).

  1. Go to Compliance → EU AI Act → Incident Management

  2. Create an Incident Response Workflow:

    • When an incident occurs, create an incident record with:
      • What happened (description of harm)
      • Which system caused it
      • How many people affected
      • Severity level
    • TruthVouch auto-generates an Article 73 notification draft
    • Your legal team reviews and submits to authorities
  3. Set up Deadline Alerts:

    • 30-day deadline for authority notification (EU AI Act Article 73)
    • Alerts at 21 days, 14 days, 7 days, and 3 days before deadline
    • Assign ownership (usually your Compliance Officer or Legal team)
  4. Optional: Enable Auto-Draft for High-Risk Thresholds

    • If a High-Risk system produces a decision that could cause substantial harm (e.g., denying someone credit), TruthVouch can auto-draft an incident report for your review

Note: This is for serious incidents causing substantial harm. Normal errors don’t require Article 73 notification.
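The deadline math in step 3 is straightforward to reproduce. A sketch using the 30-day window and alert offsets described in the workflow above:

```python
# Deriving the Step 5 alert schedule (21/14/7/3 days before the 30-day
# notification deadline) from an incident's discovery date.
from datetime import date, timedelta

NOTIFICATION_WINDOW = 30        # days, per the workflow above
ALERT_OFFSETS = (21, 14, 7, 3)  # days before the deadline

def alert_schedule(discovered: date):
    deadline = discovered + timedelta(days=NOTIFICATION_WINDOW)
    alerts = [deadline - timedelta(days=d) for d in ALERT_OFFSETS]
    return deadline, alerts

deadline, alerts = alert_schedule(date(2025, 3, 1))
print(deadline)   # 2025-03-31
print(alerts[0])  # 2025-03-10 (21 days before the deadline)
```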

Step 6: GDPR Alignment (Overlapping Requirements)

The EU AI Act overlaps with GDPR for systems that process personal data.

  1. Go to Compliance → GDPR

  2. For each High-Risk AI system using personal data:

    • Article 22 rights: Right to explanation and human review for automated decisions
    • Article 35 DPIAs: Already covered in Step 3 above
    • Article 33 breach notification: 72-hour deadline if personal data is breached
  3. Create Processing Agreements if:

    • You contract with a third party to operate the AI system
    • You use cloud AI services (AWS, Azure, OpenAI)
    • You share data with an AI vendor

TruthVouch tracks GDPR requirements alongside EU AI Act requirements in one dashboard.

Step 7: Generate Compliance Reports for Regulators

When auditors or regulators ask, you can produce proof of compliance in under 5 minutes.

  1. Go to Compliance → Reports → EU AI Act Report
  2. Generate:
    • Systems Inventory: All High-Risk and Prohibited systems identified
    • Risk Classification: How each system was classified
    • Documentation Status: Which systems have DPIAs and Annex IV docs
    • Incident Log: All reported incidents and their resolution
    • Authority Notifications: Records of all Article 73 submissions
    • Timeline: When documentation was created and updated
  3. Export as PDF (auditor-friendly) or NDJSON (for SIEM systems)
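NDJSON is simply one JSON object per line, which is why SIEM systems can ingest it directly. A sketch of the format with illustrative field names:

```python
# Serializing report records as NDJSON: one self-contained JSON object per
# line. Field names are illustrative, not the actual export schema.
import json

records = [
    {"system": "resume-screener-v2", "risk": "High-Risk", "dpia": "complete"},
    {"system": "support-chat", "risk": "Minimal Risk", "dpia": "n/a"},
]

ndjson = "\n".join(json.dumps(record, sort_keys=True) for record in records)
print(ndjson)
```

Each line is independently parseable, so a SIEM can tail the export without buffering the whole file.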

Regulators will see:

  • You know what AI systems you’re operating
  • You’ve assessed their risks
  • You’ve prepared mandatory documentation
  • You have incident management processes
  • You’re actively monitoring compliance

Real-World Example

Scenario: You run a hiring platform using AI to screen resumes. Your HR department notices the system is rejecting proportionally more women than men.

  1. Detect the issue: HR reports low pass rate for women candidates

  2. Classify the impact: High-Risk (employment decision) + potential discrimination (gender bias)

  3. Create incident record:

    • System: Resume Screening AI
    • Issue: Potential gender bias in decisions
    • Affected: 1,200 candidates screened over 3 months
    • Severity: High
  4. Auto-generate Article 73 notification:

    • TruthVouch shows template with your system details
    • Legal team reviews, confirms gender bias investigation is underway
    • Submits to authorities within 30 days
  5. Mitigate:

    • Review training data for gender imbalance
    • Retrain model with balanced data
    • Add human review for all decisions below 95% confidence
    • Update Annex IV documentation with new safeguards
  6. Follow up:

    • After mitigation, re-audit decision distribution
    • Document the fix in your compliance file
    • Report back to authorities that issue is resolved
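The re-audit in step 6 can start with a simple selection-rate comparison. A sketch using the four-fifths rule as an illustrative trigger (a screening heuristic, not a legal standard):

```python
# Flagging groups whose selection rate falls below 80% of the best-performing
# group's rate (the "four-fifths rule" heuristic).
def selection_rates(outcomes):
    """outcomes: {group: (passed, total)} -> {group: pass rate}"""
    return {group: passed / total for group, (passed, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return {group: True} for groups below threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

flags = adverse_impact({"men": (300, 600), "women": (180, 600)})
print(flags)  # women flagged: 0.30 / 0.50 = 0.6, below the 0.8 threshold
```

A flag here is a signal to investigate, not proof of discrimination; the incident record and mitigation steps above still apply.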

Checklist: EU AI Act Compliance

| Item | Status | Owner | Deadline |
|---|---|---|---|
| AI Systems Inventory | Discovered | Compliance Officer | Done |
| Risk Classification | Completed | Compliance Officer | Done |
| Prohibited Systems Check | None found | Compliance Officer | Done |
| DPIA for High-Risk Systems | 2 completed | Legal/Compliance | Week 2 |
| Annex IV Docs for High-Risk | 2 completed | Engineering + Compliance | Week 3 |
| Article 73 Incident Process | Configured | Compliance Officer | Week 2 |
| GDPR Article 22 (Explanations) | Implemented | Engineering | Week 4 |
| Regular Audits | Scheduled | Compliance Officer | Monthly |

Key Dates & Deadlines

  • 2 February 2025: Prohibited AI practices are banned (Article 5)
  • 2 August 2025: General Purpose AI transparency requirements apply
  • 2 August 2026: Most High-Risk AI requirements become enforceable
  • Rolling: Article 73 incidents must be reported within 30 days of discovery

Stay compliant by:

  • Running compliance scans monthly
  • Updating DPIAs when systems change
  • Tracking incident deadlines
  • Documenting all changes to High-Risk systems

Next Steps

Questions? Contact your Compliance Success Manager or reach out to compliance@truthvouch.ai.