The world's first comprehensive AI law — risk-based framework for AI systems in the EU with fines up to €35M or 7% of global turnover.
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Published on 12 July 2024 with 113 articles and 180 recitals, it establishes a risk-based classification system with four tiers: unacceptable risk (banned), high risk (strict conformity assessment), limited risk (transparency obligations), and minimal risk (no specific requirements). It also introduces dedicated rules for general-purpose AI (GPAI) models. Prohibitions took effect on 2 February 2025, high-risk obligations apply from 2 August 2026, and full enforcement occurs by 2 August 2027.
The Act applies to providers who develop or place AI systems on the EU market; deployers who use AI systems; importers and distributors; product manufacturers integrating AI; and GPAI model providers. Like GDPR, it has extraterritorial reach — covering providers outside the EU if their AI output is used within the Union.
Article 3: Definitions
Article 5: Prohibited AI practices
Article 6: Classification rules for high-risk AI systems
Article 8: Compliance with the requirements
Article 9: Risk management system
Article 10: Data and data governance
Article 50: Transparency obligations for certain AI systems
Prohibitions apply
2 Feb 2025: Banned AI practices (social scoring, manipulative subliminal techniques) no longer permitted.
High-risk obligations
2 Aug 2026: Conformity assessment and transparency requirements for high-risk AI systems.
Full enforcement
2 Aug 2027: All AI Act provisions, including GPAI model obligations, fully applicable.
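The staged timeline above lends itself to a simple lookup: given a date, which milestones are already in force? A minimal sketch, using only the dates and labels from the timeline (the function name and list structure are illustrative, not part of the Law4Devs API):

```python
from datetime import date

# Key application dates from the AI Act timeline above.
MILESTONES = [
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk AI practices"),
    (date(2026, 8, 2), "High-risk conformity assessment and transparency obligations"),
    (date(2027, 8, 2), "All provisions, including GPAI model obligations"),
]

def obligations_in_force(on: date) -> list[str]:
    """Return the milestones already in force on a given date."""
    return [label for deadline, label in MILESTONES if on >= deadline]
```

For example, a check in mid-2025 returns only the prohibitions, while any date from 2 August 2027 onward returns all three milestones.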
Three tiers: up to €35M or 7% for prohibited AI practices; up to €15M or 3% for high-risk AI system violations; up to €7.5M or 1.5% for providing incorrect or misleading information.
The AI Act classifies AI systems into four risk categories, each with different obligations.
High-risk AI systems face the most stringent requirements under the AI Act.
The AI Act introduces a new regulatory category for foundation models and GPAI systems.
Law4Devs provides all 113 AI Act articles as structured JSON from EUR-Lex. Filter by risk category, obligation type, actor role, and AI system type. Cross-reference with GDPR, the CRA, and sector-specific regulation for a complete compliance picture.
GET /v1/frameworks/ai-act/articles → 200 OK · structured JSON · official EUR-Lex source
The EU AI Act (Regulation EU 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Published on 12 July 2024 and containing 113 articles with 180 recitals, it establishes a risk-based classification system with four tiers: unacceptable risk (banned outright, such as social scoring and real-time biometric identification in public spaces), high risk (subject to strict conformity assessment, including AI in critical infrastructure, education, employment, law enforcement, and migration), limited risk (transparency obligations, such as chatbots and deepfakes), and minimal risk (no specific requirements). The Act also introduces dedicated rules for general-purpose AI (GPAI) models, including transparency requirements and systemic risk evaluations for the most capable models. Prohibitions on unacceptable-risk AI practices took effect on 2 February 2025. High-risk obligations and transparency requirements apply from 2 August 2026. Full enforcement of all provisions occurs by 2 August 2027. Law4Devs provides all 113 AI Act articles as structured JSON, queryable by risk category and obligation type.
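The four-tier classification described above maps naturally onto a small lookup table. The sketch below encodes only the tiers, statuses, and example systems named in the text; the dictionary keys and helper function are illustrative, not the shape of any Law4Devs response:

```python
# The four risk tiers of the AI Act, with the example systems named above.
RISK_TIERS = {
    "unacceptable": {
        "status": "banned outright",
        "examples": ["social scoring",
                     "real-time biometric identification in public spaces"],
    },
    "high": {
        "status": "strict conformity assessment",
        "examples": ["critical infrastructure", "education", "employment",
                     "law enforcement", "migration"],
    },
    "limited": {
        "status": "transparency obligations",
        "examples": ["chatbots", "deepfakes"],
    },
    "minimal": {
        "status": "no specific requirements",
        "examples": [],
    },
}

def obligations_for(tier: str) -> str:
    """Look up the headline obligation for a risk tier."""
    return RISK_TIERS[tier]["status"]
```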
EU Regulation 2024/1689 applies to providers who develop or place AI systems on the EU market, deployers who use AI systems under their authority, importers and distributors of AI systems, product manufacturers integrating AI into products covered by EU harmonisation legislation, and authorised representatives of non-EU providers. Like GDPR, the AI Act has extraterritorial reach — it covers providers established outside the EU if the output of their AI system is used within the Union. It also applies to public authorities and EU institutions except when AI systems are used exclusively for military, defence, or national security purposes. GPAI model providers, regardless of where they are established, must comply if their models are placed on the EU market. Small and medium enterprises receive certain exemptions from fees and some procedural requirements, but not from core safety obligations for high-risk systems. Law4Devs lets you filter AI Act articles by provider or deployer role, risk category, and sector to identify your exact obligations.
Under EU Regulation 2024/1689, AI Act fines are structured in three tiers reflecting the severity of the violation. The highest tier imposes fines up to €35 million or 7% of the organisation's total worldwide annual turnover of the preceding financial year, whichever is higher, for deploying prohibited AI practices such as social scoring or manipulative subliminal techniques. The second tier reaches up to €15 million or 3% of global turnover for violations of high-risk AI system requirements, including conformity assessment failures, inadequate risk management, or insufficient data governance. The third tier imposes up to €7.5 million or 1.5% of global turnover for supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities. For SMEs and startups, the lower of the two amounts in each tier applies. These are among the highest penalty rates in EU regulatory law, exceeding even GDPR's 4% maximum. Law4Devs structures all AI Act penalty and enforcement provisions as queryable JSON to help teams assess exposure by risk category.
Law4Devs provides all 113 articles of EU Regulation 2024/1689 as structured, machine-readable JSON via a REST API, sourced directly from EUR-Lex. Engineering and compliance teams can query AI Act articles by risk classification (unacceptable, high, limited, minimal), obligation type (conformity assessment, transparency, risk management, data governance, post-market monitoring), actor role (provider, deployer, importer, distributor), and AI system category. Dedicated endpoints cover GPAI model obligations, prohibited practices, and high-risk system requirements in specific sectors such as healthcare, education, employment, and law enforcement. Each response includes the full legal text, article metadata, semantic tags, and cross-references to related provisions in GDPR, the CRA, and sector-specific regulation. The API tracks EUR-Lex amendments automatically, so integrations always reflect the latest consolidated text. Responses average 34 milliseconds, suitable for embedding in AI governance dashboards, model risk registries, or compliance automation pipelines. Official SDKs are available for Python, TypeScript, Java, Rust, PHP, and Dart.
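The filters described above combine into a single query string against the articles endpoint. A hedged sketch: the endpoint path comes from the example earlier on this page, but the host, query-parameter names, and record shape shown here are assumptions for illustration, not the documented Law4Devs API:

```python
from urllib.parse import urlencode

# Hypothetical host; the /v1/frameworks/ai-act/articles path is from the example above.
BASE = "https://api.law4devs.example/v1/frameworks/ai-act/articles"

def articles_url(**filters: str) -> str:
    """Build a filtered query against the articles endpoint."""
    return f"{BASE}?{urlencode(filters)}" if filters else BASE

url = articles_url(risk_category="high", role="provider")

# Assumed shape of returned article records, filtered client-side for illustration.
sample = [
    {"article": 9, "risk_category": "high", "obligation_type": "risk management"},
    {"article": 50, "risk_category": "limited", "obligation_type": "transparency"},
]
high_risk = [a for a in sample if a["risk_category"] == "high"]
```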
All articles, recitals, and amendments — queryable, filterable, and always up to date.