EU AI Act Interactive Guide

Getting to Grips with the EU AI Act

The EU's AI Act is here, and it's a big deal for anyone working with artificial intelligence. It's all about making AI safe, fair, and trustworthy. Sounds good, right? But let's be honest, new regulations can be a bit of a maze.

You've probably seen the AI Act described as a pyramid: some AI is banned (unacceptable risk), some is high-risk with lots of rules, then limited risk with transparency duties, and finally, minimal risk with fewer worries. That's a helpful starting point, but the truth is, it's not always that neat. Sometimes, an AI system might tick boxes in more than one category, or involve complex components like General Purpose AI models that have their own specific rules. It's like a puzzle where the pieces can overlap.

This guide is designed to help you make sense of it all. We'll break down the key ideas and point you to our assessment tool so you can start figuring out where your AI systems fit in.

First Things First: Is It an "AI System"?

Before diving into risks, you need to know if what you're working with is even considered an "AI system" by the Act. Article 3(1) defines it as: "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".

In simpler terms, look for:

  • It's machine-based (hardware/software).
  • It has some autonomy (operates without constant human command).
  • It infers outputs (like predictions or decisions) from inputs.
  • It might be adaptive (learns over time), but this isn't a must.
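If you track this screening in your own tooling, a minimal sketch like the one below can record the Article 3(1) criteria. The field and method names are illustrative, not taken from the Act.

```python
from dataclasses import dataclass

@dataclass
class Article3Check:
    """Records the Article 3(1) 'AI system' criteria for one candidate system.

    Field names are this guide's own, not the Act's wording.
    """
    machine_based: bool            # runs as hardware/software
    operates_with_autonomy: bool   # some independence from constant human control
    infers_outputs: bool           # derives predictions/content/recommendations/decisions from inputs
    adaptive: bool = False         # may learn after deployment; optional, not required

    def likely_ai_system(self) -> bool:
        # Adaptiveness is not a requirement, so it does not gate the result.
        return self.machine_based and self.operates_with_autonomy and self.infers_outputs

# Example: a rule-free recommender that learns from clicks
print(Article3Check(True, True, True, adaptive=True).likely_ai_system())  # True
```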

The "intended purpose" – what you plan to use the AI for – is key for figuring out its risk level. Our can help you with this.

Keeping Track: Your AI Asset Inventory

Think of an AI asset inventory as your central hub for AI Act compliance. It's not just a list; it's a living record that helps you manage risks and obligations.

Key Steps to Manage Your Inventory:

  • List all your potential AI systems.
  • Check if they fit the Act's definition of an "AI System."
  • Clearly define the "Intended Purpose(s)" for each.
  • Classify each system (Prohibited, High, Limited, Minimal Risk) – our assessment tool is designed for this.
  • Document why you classified it that way and what rules apply.
  • Review and update regularly, especially if things change.
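As a sketch of what a single inventory entry could capture, assuming you keep the record in code (field names and states are this guide's own, not prescribed by the Act):

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "Prohibited"
    HIGH = "High-Risk"
    LIMITED = "Limited Risk"
    MINIMAL = "Minimal Risk"
    UNCLASSIFIED = "Unclassified"

@dataclass
class InventoryEntry:
    name: str
    intended_purposes: list[str]                      # what the system is meant to be used for
    meets_ai_system_definition: bool                  # Article 3(1) screening result
    risk_class: RiskClass = RiskClass.UNCLASSIFIED
    classification_rationale: str = ""                # why you classified it that way
    applicable_obligations: list[str] = field(default_factory=list)
    last_reviewed: date | None = None                 # prompt for regular review
```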

For more on risk levels and specific duties, check out the Risk Categories and High-Risk Obligations guides.

Ready to dive deeper? Use the tabs above to explore specific topics or head straight to the assessment tool.

Guide: Roles & Responsibilities under the EU AI Act

The EU AI Act defines specific roles for entities involved in the AI lifecycle. Understanding your role is crucial for identifying applicable obligations.

Provider

Definition (Art. 3(3)): Develops an AI system or general-purpose AI model, or has one developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.

Key Responsibilities:

  • Ensuring compliance with the Act's requirements, conformity assessments, technical documentation, a quality management system (QMS), registration, and post-market monitoring (PMS).

Deployer

Definition (Art. 3(4)): Uses an AI system under its authority (except personal non-professional use).

Key Responsibilities (High-Risk AI - Art. 26):

  • Use per instructions, ensure relevant input data, monitor operation, keep logs, ensure human oversight, and carry out a fundamental rights impact assessment (FRIA) where required (Art. 27).

Importer

Definition (Art. 3(6)): Places an AI system from a provider established outside the EU on the EU market.

Key Responsibilities (Art. 23):

  • Ensure non-EU provider completed conformity & docs, verify CE mark, indicate contact details.

Distributor

Definition (Art. 3(7)): Makes an AI system available on the EU market, other than as the provider or importer.

Key Responsibilities (Art. 24):

  • Verify CE mark & docs, ensure provider/importer compliance, don't make non-conforming systems available.

Note: An entity can fulfill multiple roles. Obligations depend on activities performed.

Tool: Full AI System Assessment

This questionnaire walks you through a series of questions to arrive at a preliminary classification of your AI system under the EU AI Act. It is an informational tool, not legal advice.

Step 0: Your Role

0.1. What is your primary role concerning this AI system?

Step 1: AI System Definition (Art 3(1))

1.1. Is the system machine-based?

1.2. Does it operate with autonomy?

1.3. Does it infer outputs?

Step 2: Prohibited AI Practices (Art 5)

Does your AI system involve any of the following? (Select all that apply)

Step 3: High-Risk Pathway Identification (Art 6)

3.1. Is the AI system an Annex I product/safety component requiring 3rd party assessment?

3.2. Is the AI system's intended purpose listed in Annex III?

This is an indicative list. Refer to the official EU AI Act for the definitive text.

  • Biometric identification and categorisation.
  • Management/operation of critical infrastructure.
  • Education and vocational training.
  • Employment, workers management, access to self-employment.
  • Access to essential private/public services and benefits.
  • Law enforcement.
  • Migration, asylum, and border control management.
  • Administration of justice and democratic processes.

Step 4 (For Annex III Systems): Profiling Check

4.1. Does your Annex III system perform "profiling of natural persons"?

If yes, it's High-Risk (the Art. 6(3) derogation is not available).

Step 4 (Annex III, No Profiling): Significant Harm Check

4.2. Does the system pose a significant risk of harm to health, safety, or fundamental rights?

If yes, likely High-Risk. If no, check derogation conditions.

Step 4 (Annex III, No Profiling/Harm): Derogation Conditions

Does it fulfill AT LEAST ONE condition for derogation?

If yes (and no profiling/harm), derogation MAY apply. Else, High-Risk.

Step 5: Limited Risk AI (Art 50 Transparency)

Does it involve any of these functionalities? (Select all that apply)
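Putting the steps above together, the order of checks can be sketched as plain conditional logic. This is a simplification of Articles 5, 6 and 50 (for instance, a High-Risk system can additionally carry Art. 50 transparency duties) and is no substitute for the questionnaire or legal review; the parameter names are ours.

```python
def preliminary_classification(
    uses_prohibited_practice: bool,      # Step 2 (Art. 5)
    annex_i_third_party: bool,           # Step 3.1 (Art. 6(1))
    annex_iii_purpose: bool,             # Step 3.2 (Annex III)
    profiling_of_persons: bool,          # Step 4.1
    significant_risk_of_harm: bool,      # Step 4.2
    meets_derogation_condition: bool,    # Step 4 derogation conditions (Art. 6(3))
    art50_transparency_feature: bool,    # Step 5 (Art. 50)
) -> str:
    if uses_prohibited_practice:
        return "Prohibited"
    if annex_i_third_party:
        return "High-Risk"
    if annex_iii_purpose:
        if profiling_of_persons:
            return "High-Risk"  # derogation not available for profiling
        if significant_risk_of_harm or not meets_derogation_condition:
            return "High-Risk"
    # Not High-Risk: check the Art. 50 transparency duties, otherwise Minimal.
    return "Limited Risk" if art50_transparency_feature else "Minimal Risk"
```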

Guide: EU AI Act Risk Categories

The Act categorizes AI systems into four main risk levels, with obligations proportionate to the risk. Understanding these categories is crucial for compliance.

Illustrative conceptual distribution of AI systems by risk. Actual distribution varies.

Prohibited AI (Unacceptable Risk)

Practices posing a clear threat to safety, livelihoods, and fundamental rights (Article 5). These are banned.

Inventory State: Prohibited - Cease/Prevent Deployment

High-Risk AI

Systems that can adversely impact safety or fundamental rights (Article 6). Permitted but subject to stringent requirements.

Inventory State: Potentially High-Risk

Limited Risk AI

Systems posing specific transparency risks (Article 50). Users must be made aware that they are interacting with AI or that content is AI-generated.

Inventory State: Limited Risk - Transparency Obligations Apply

Minimal or No Risk AI

Systems not falling into Prohibited, High, or Limited risk categories. Generally permitted without additional specific AI Act obligations.

Inventory State: Minimal Risk - Monitor
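If you mirror these categories in your inventory, the category-to-state mapping used throughout this guide is just a lookup table (the state labels are this guide's own wording, not the Act's):

```python
# Inventory states as used in this guide; adapt the labels to your own register.
INVENTORY_STATE = {
    "Prohibited":   "Prohibited - Cease/Prevent Deployment",
    "High-Risk":    "Potentially High-Risk",
    "Limited Risk": "Limited Risk - Transparency Obligations Apply",
    "Minimal Risk": "Minimal Risk - Monitor",
}
```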

Guide: High-Risk AI System Obligations (Provider Focus)

Providers of High-Risk AI systems face significant obligations under Chapter III of the EU AI Act. This section outlines key requirements. Deployers also have specific duties (Art. 26), detailed further in the "Roles & Responsibilities" guide.

Risk Management System (Art. 9)

Establish, implement, document, and maintain a continuous, iterative risk management system throughout the AI system's entire lifecycle. This includes identification, estimation, evaluation of risks, and adoption of risk mitigation measures.

Data and Data Governance (Art. 10)

Implement appropriate data governance and management practices for training, validation, and testing data sets. Ensure data is relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose, with appropriate statistical properties. Identify and address possible biases.

Technical Documentation (Art. 11 & Annex IV)

Prepare and maintain extensive technical documentation before market placement. This must demonstrate compliance and include system description, capabilities, limitations, algorithms, data, training/testing/validation procedures, risk management, instructions for use, and PMS plan.

Record-Keeping (Logs) (Art. 12)

Design systems for automatic, traceable, secure, and robust logging of operations relevant for risk identification, monitoring, and post-market surveillance.

Transparency & Info to Deployers (Art. 13)

Provide clear, concise, and comprehensive instructions for use, covering identity, intended purpose, capabilities, limitations, accuracy, robustness, cybersecurity, human oversight measures, computational resources, expected lifetime, and maintenance.

Human Oversight (Art. 14)

Design systems for effective human oversight, allowing intervention, disregard, or override of outputs. Measures should be proportionate to risks.

Accuracy, Robustness, and Cybersecurity (Art. 15)

Ensure appropriate levels throughout the lifecycle. Systems must be resilient to errors, faults, inconsistencies, and attempts to alter use or behavior. Implement cybersecurity measures.

Quality Management System (Art. 17)

Implement a QMS ensuring compliance with the Act, covering strategy, design controls, testing, post-market monitoring, incident reporting, and accountability.

Conformity Assessment (Art. 43)

Undergo the relevant conformity assessment procedure before market placement and for substantial modifications. This may involve internal controls or a Notified Body, depending on the system.

Registration (Art. 49)

Providers must register their high-risk AI systems in a publicly accessible EU database before placing them on the market or putting them into service.

Post-Market Monitoring (PMS) (Art. 72)

Establish and document a PMS system to proactively collect, document, and analyze data on the performance of high-risk AI systems throughout their lifetime, and take corrective actions if necessary.

Serious Incident Reporting (Art. 73)

Report any serious incidents and any malfunctioning of AI systems that might lead to such incidents to the relevant national competent authorities.

Guide: General Purpose AI (GPAI) Models

This section provides a detailed overview of General Purpose AI (GPAI) models as defined and regulated by the EU AI Act. Understanding these provisions is crucial if your organization develops, provides, or integrates GPAI models.

Defining GPAI Models (Article 3(63))

A GPAI model is defined as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications."

These are foundational components, like large language or image generation models, designed for broad applicability. "Significant generality" is a key characteristic.

Obligations for ALL Providers of GPAI Models (Article 53)

These baseline obligations apply from August 2, 2025, to all GPAI model providers:

  • Technical Documentation: Draw up and maintain up-to-date technical documentation (Art. 53(1)(a) & (b)). This must contain information for downstream providers to understand the GPAI model's capabilities, limitations, and comply with their own AI Act obligations. It must be provided to the AI Office, national authorities on request, and downstream providers.
  • Copyright Policy (Art. 53(1)(c)): Implement a policy to respect Union copyright law, particularly identifying and respecting reservations of rights under Article 4(3) of the Copyright Directive (EU) 2019/790.
  • Training Data Summary (Art. 53(1)(d)): Prepare and make publicly available a "sufficiently detailed summary" of the content used for training the GPAI model.

GPAI Models with Systemic Risk (Articles 51, 52, 55 & Annex XIII)

GPAI models with "high-impact capabilities" are deemed to pose systemic risks and face more stringent obligations.

Classification Criteria (Article 51):

  • Presumption by FLOPs: A model is presumed to have systemic risk if the cumulative amount of computation used for its training (measured in floating-point operations - FLOPs) is greater than $10^{25}$. The Commission can update this threshold.
  • Commission Designation: The Commission can designate other GPAI models as systemic risk based on criteria in Annex XIII, such as:
    • High number of parameters.
    • High number of registered users or business users.
    • Input and output modalities (e.g., text, image, speech, structured data).
    • Perceived or actual level of autonomy and sophistication.
    • Degree of scalability in terms of users or tasks.
    • Access to tools (e.g., internet access, ability to call APIs).

Provider Notification (Article 52(1)):

Providers must notify the Commission within two weeks if their GPAI model meets the systemic risk criteria (e.g., exceeds FLOPs threshold or they anticipate it will).
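As a back-of-the-envelope sketch of the Article 51 presumption and the Article 52(1) notification window (the threshold constant comes from the Act; the function names are ours):

```python
from datetime import date, timedelta

# Cumulative training compute threshold, Art. 51(2); the Commission may update it.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """Presumption only; the Commission can also designate models via Annex XIII criteria."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

def notification_deadline(criteria_met_on: date) -> date:
    """Art. 52(1): notify the Commission without delay, at the latest two weeks after the criteria are met."""
    return criteria_met_on + timedelta(weeks=2)

# Example: a training run of ~3 x 10^25 FLOPs exceeds the threshold.
assert presumed_systemic_risk(3e25)
print(notification_deadline(date(2025, 9, 1)))  # 2025-09-15
```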

Additional Obligations for Systemic Risk GPAI Providers (Article 55):

  • Model Evaluations: Perform and document model evaluations, including adversarial testing, to identify and assess systemic risks.
  • Systemic Risk Assessment & Mitigation: Assess and mitigate possible systemic risks (e.g., to public health, safety, fundamental rights, society) throughout the model's lifecycle.
  • Serious Incident Reporting: Track, document, and report serious incidents and corrective measures to the AI Office and national authorities.
  • Cybersecurity Protection: Ensure adequate cybersecurity for the model and its physical infrastructure.

Open-Source GPAI Models (Article 53(2))

Providers of GPAI models released under a free and open-source license that allows access, use, modification, and distribution of the model are exempt from certain obligations under Art. 53(1)(a) (technical documentation to AI Office/authorities) and Art. 53(1)(b) (technical documentation to downstream providers), provided the model's parameters, architecture, and usage information are publicly available.

However, this exemption does NOT apply to GPAI models with systemic risk. Systemic risk GPAI models must comply with all obligations, regardless of their open-source status.
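The interaction between the open-source exemption and systemic risk reduces to a single conditional, sketched here as a simplification of Art. 53(2) (parameter names are ours):

```python
def docs_exemption_applies(
    free_and_open_source_licence: bool,   # licence allows access, use, modification, distribution
    parameters_and_usage_public: bool,    # weights, architecture and usage information publicly available
    systemic_risk: bool,                  # Art. 51 classification
) -> bool:
    """Exemption from the Art. 53(1)(a)-(b) technical documentation duties.

    Never applies to GPAI models with systemic risk; the copyright policy and
    training-data summary duties (Art. 53(1)(c)-(d)) remain in any case.
    """
    return free_and_open_source_licence and parameters_and_usage_public and not systemic_risk
```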

Interaction with AI System Rules & Fine-tuning

A GPAI model is a component. An AI system incorporating it still needs its own risk classification (Prohibited, High, Limited, Minimal) based on its specific intended purpose.

If an organization develops a high-risk AI system that integrates a systemic risk GPAI model they also developed, they face compounded compliance burdens (Chapter III for the system + Art. 53 & 55 for the model).

Entities fine-tuning an existing GPAI model may become providers of a new GPAI model, incurring provider obligations for the modification/fine-tuning. The AI Office is expected to provide further clarifications.

Guide: Resources & Key Timelines

Key Definitions (Simplified)

  • AI System: Machine-based, autonomous, infers outputs.
  • Provider: Develops AI system or has it developed, places on market/service under own name/trademark.
  • Deployer: Uses AI system under its authority (except personal non-professional activity).
  • Intended Purpose: Use for which an AI system is intended by the provider.
  • GPAI Model: AI model with significant generality, capable of wide range of tasks.

Key Application Dates

  • Feb 2, 2025: Prohibitions (Art. 5) and the general provisions (incl. AI literacy) apply.
  • Aug 2, 2025: Rules on GPAI models apply.
  • Aug 2, 2026: Majority of AI Act provisions apply (incl. most High-Risk).
  • Aug 2, 2027: Obligations for High-Risk AI (Annex I, Section A products).
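If you want your inventory to flag which obligations already apply, the milestones above can be kept as a simple date table (dates as listed; the labels are this guide's shorthand):

```python
from datetime import date

APPLICATION_DATES = {
    date(2025, 2, 2): "Prohibitions (Art. 5) and general provisions apply",
    date(2025, 8, 2): "GPAI model rules apply",
    date(2026, 8, 2): "Majority of provisions apply (incl. most High-Risk)",
    date(2027, 8, 2): "High-Risk obligations for Annex I products apply",
}

def milestones_in_force(as_of: date) -> list[str]:
    """Return the milestones already applicable on a given date."""
    return [label for d, label in sorted(APPLICATION_DATES.items()) if d <= as_of]
```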

This is a simplified overview. Refer to official text.

Further Information

For complete and official texts, guidelines, and updates, refer to official EU sources.

This guide is for informational purposes only and does not constitute legal advice. Consult with legal professionals for specific guidance on EU AI Act compliance.