
EU AI Act — Risk-Based AI Regulation Compliance Guide

The world's first comprehensive AI law — risk-based framework for AI systems in the EU with fines up to €35M or 7% of global turnover.

What is the AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Published on 12 July 2024 with 113 articles and 180 recitals, it establishes a risk-based classification system with four tiers: unacceptable risk (banned), high risk (strict conformity assessment), limited risk (transparency obligations), and minimal risk (no specific requirements). It also introduces dedicated rules for general-purpose AI (GPAI) models. Prohibitions took effect on 2 February 2025, high-risk obligations apply from 2 August 2026, and full enforcement occurs by 2 August 2027.

Who It Applies To

Providers who develop or place AI systems on the EU market, deployers who use AI systems, importers and distributors, product manufacturers integrating AI, and GPAI model providers. Like GDPR, it has extraterritorial reach — covering providers outside the EU if their AI output is used within the Union.

Key Articles & Obligations

  • Article 3: Definitions, including "AI system", "provider", and "deployer"
  • Article 5: Prohibited AI practices (the unacceptable-risk tier)
  • Article 6: Classification rules for high-risk AI systems
  • Article 8: Compliance with the requirements for high-risk AI systems
  • Article 9: Risk management system
  • Article 10: Data and data governance
  • Article 50: Transparency obligations for providers and deployers of certain AI systems

Key Deadlines

  • 2 Feb 2025 · Prohibitions apply: banned AI practices (social scoring, manipulative subliminal techniques) no longer permitted.
  • 2 Aug 2025 · GPAI obligations: transparency and documentation duties for general-purpose AI model providers take effect.
  • 2 Aug 2026 · High-risk obligations: conformity assessment and transparency requirements for high-risk AI systems.
  • 2 Aug 2027 · Full enforcement: all remaining provisions fully applicable, including obligations for GPAI models placed on the market before 2 August 2025.

Fines & Enforcement

Three tiers, each capped at the higher of a fixed amount or a share of worldwide annual turnover: up to €35M or 7% for prohibited AI practices; up to €15M or 3% for violations of high-risk AI system requirements; up to €7.5M or 1.5% for supplying incorrect or misleading information to authorities.
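As a rough sketch of the tier arithmetic described above (the function name and tier keys are illustrative, not part of any API; the SME rule follows the Act's "lower of the two amounts" carve-out):

```python
CAPS = {
    # tier: (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_violations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.015),
}

def max_fine(tier: str, turnover_eur: float, sme: bool = False) -> float:
    """Maximum fine for a tier: the higher of the two amounts,
    or the lower of the two for SMEs and startups."""
    fixed, share = CAPS[tier]
    pick = min if sme else max
    return pick(fixed, share * turnover_eur)
```

For example, at €1 billion turnover the prohibited-practices cap is 7%, i.e. €70 million, since that exceeds the €35M fixed amount.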

AI Risk Classification System

The AI Act classifies AI systems into four risk categories, each with different obligations.

  • Unacceptable risk — banned outright: social scoring, real-time biometric identification in public spaces (with narrow exceptions), emotion recognition in workplaces/schools, predictive policing based solely on profiling
  • High risk — strict conformity assessment: AI in critical infrastructure, education, employment, law enforcement, migration, healthcare, and justice
  • Limited risk — transparency obligations: chatbots must disclose AI nature, deepfakes must be labelled, emotion recognition systems must inform users
  • Minimal risk — no specific requirements: most AI systems including spam filters, AI-enabled video games, inventory management

Obligations for High-Risk AI Systems

High-risk AI systems face the most stringent requirements under the AI Act.

  • Risk management system — continuous, iterative process throughout the AI system lifecycle
  • Data governance — training, validation, and testing datasets must meet quality criteria
  • Technical documentation — comprehensive documentation demonstrating compliance
  • Record-keeping — automatic logging of the AI system's operation
  • Transparency and information provision — clear instructions for deployers
  • Human oversight — measures enabling human oversight to prevent or minimise risks
  • Accuracy, robustness, and cybersecurity — consistent performance throughout lifecycle
  • Conformity assessment — before placing on the market, high-risk AI must undergo assessment

General-Purpose AI (GPAI) Models

The AI Act introduces a new regulatory category for foundation models and GPAI systems.

  • All GPAI model providers must draw up technical documentation and publish a summary of their training content
  • Providers of GPAI models with systemic risk (identified via compute thresholds, currently 10^25 FLOPs) face additional obligations
  • Systemic-risk models must undergo model evaluation and adversarial testing, with tracking and reporting of serious incidents
  • Adherence to the EU's GPAI Code of Practice can be used to demonstrate conformity with these obligations

How Law4Devs Helps with AI Act Compliance

Law4Devs provides all 113 AI Act articles as structured JSON from EUR-Lex. Filter by risk category, obligation type, actor role, and AI system type. Cross-reference with GDPR, the CRA, and sector-specific regulation for a complete compliance picture.


Query AI Act via API

GET /v1/frameworks/ai-act/articles
200 OK · structured JSON · official EUR-Lex source
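As a sketch of consuming this endpoint from Python with only the standard library (the base URL below is a placeholder, not the real API host, and the filter parameter names are assumptions):

```python
import json
from urllib import parse, request

BASE = "https://api.law4devs.example/v1"  # placeholder host

def articles_url(framework: str, **filters: str) -> str:
    """Build the GET /v1/frameworks/{framework}/articles URL
    with optional query-string filters."""
    qs = parse.urlencode(sorted(filters.items()))
    return f"{BASE}/frameworks/{framework}/articles" + (f"?{qs}" if qs else "")

def fetch_articles(framework: str, **filters: str) -> list:
    """Fetch and decode the JSON article list."""
    with request.urlopen(articles_url(framework, **filters)) as resp:
        return json.load(resp)
```

For example, `fetch_articles("ai-act", risk_category="high")` would request only the high-risk provisions, assuming the API accepts a `risk_category` query parameter.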

Frequently Asked Questions

What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Published on 12 July 2024 and containing 113 articles with 180 recitals, it establishes a risk-based classification system with four tiers: unacceptable risk (banned outright, such as social scoring and real-time biometric identification in public spaces), high risk (subject to strict conformity assessment, including AI in critical infrastructure, education, employment, law enforcement, and migration), limited risk (transparency obligations, such as chatbots and deepfakes), and minimal risk (no specific requirements). The Act also introduces dedicated rules for general-purpose AI (GPAI) models, including transparency requirements and systemic risk evaluations for the most capable models. Prohibitions on unacceptable-risk AI practices took effect on 2 February 2025. High-risk obligations and transparency requirements apply from 2 August 2026. Full enforcement of all provisions occurs by 2 August 2027. Law4Devs provides all 113 AI Act articles as structured JSON, queryable by risk category and obligation type.

Who does the AI Act apply to?

EU Regulation 2024/1689 applies to providers who develop or place AI systems on the EU market, deployers who use AI systems under their authority, importers and distributors of AI systems, product manufacturers integrating AI into products covered by EU harmonisation legislation, and authorised representatives of non-EU providers. Like GDPR, the AI Act has extraterritorial reach — it covers providers established outside the EU if the output of their AI system is used within the Union. It also applies to public authorities and EU institutions except when AI systems are used exclusively for military, defence, or national security purposes. GPAI model providers, regardless of where they are established, must comply if their models are placed on the EU market. Small and medium enterprises receive certain exemptions from fees and some procedural requirements, but not from core safety obligations for high-risk systems. Law4Devs lets you filter AI Act articles by provider or deployer role, risk category, and sector to identify your exact obligations.

What are AI Act fines?

Under EU Regulation 2024/1689, AI Act fines are structured in three tiers reflecting the severity of the violation. The highest tier imposes fines up to €35 million or 7% of the organisation's total worldwide annual turnover of the preceding financial year, whichever is higher, for deploying prohibited AI practices such as social scoring or manipulative subliminal techniques. The second tier reaches up to €15 million or 3% of global turnover for violations of high-risk AI system requirements, including conformity assessment failures, inadequate risk management, or insufficient data governance. The third tier imposes up to €7.5 million or 1.5% of global turnover for supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities. For SMEs and startups, the lower of the two amounts in each tier applies. These are among the highest penalty rates in EU regulatory law, exceeding even GDPR's 4% maximum. Law4Devs structures all AI Act penalty and enforcement provisions as queryable JSON to help teams assess exposure by risk category.

How does Law4Devs help with the AI Act?

Law4Devs provides all 113 articles of EU Regulation 2024/1689 as structured, machine-readable JSON via a REST API, sourced directly from EUR-Lex. Engineering and compliance teams can query AI Act articles by risk classification (unacceptable, high, limited, minimal), obligation type (conformity assessment, transparency, risk management, data governance, post-market monitoring), actor role (provider, deployer, importer, distributor), and AI system category. Dedicated endpoints cover GPAI model obligations, prohibited practices, and high-risk system requirements in specific sectors such as healthcare, education, employment, and law enforcement. Each response includes the full legal text, article metadata, semantic tags, and cross-references to related provisions in GDPR, the CRA, and sector-specific regulation. The API tracks EUR-Lex amendments automatically, so integrations always reflect the latest consolidated text. Responses average 34 milliseconds, suitable for embedding in AI governance dashboards, model risk registries, or compliance automation pipelines. Official SDKs are available for Python, TypeScript, Java, Rust, PHP, and Dart.

Access AI Act as Structured JSON

All articles, recitals, and amendments — queryable, filterable, and always up to date.