The Artificial Intelligence Act (Regulation (EU) 2024/1689) (AI Act) entered into force on 1 August 2024 and is the first comprehensive legal framework on AI worldwide. Although commonly called the 'Act', it is in fact an EU regulation and is therefore directly applicable in Ireland. The aim of the new legislation is to enhance the trustworthiness of AI by ensuring that AI is supervised and that certain rules are enforced, so that fundamental rights, safety and ethical principles are upheld.
AI categories under the AI Act
The AI Act sets out a classification system based on the types of uses and effects that AI systems can have in various circumstances. AI systems are classified into four risk categories: unacceptable risk, high risk, limited risk and minimal risk.
- Unacceptable Risk - AI systems that violate fundamental rights or pose a clear threat to human rights, such as social scoring, mass surveillance or manipulative AI. AI systems which fall within this class are prohibited.
- High Risk - AI systems that have a significant impact on people's lives or rights, such as in the areas of biometric identification, recruitment, education, health, justice, or law enforcement. These systems are the focus of the majority of obligations created in the AI Act, such as human oversight, data quality, transparency, accuracy, security, and conformity assessment.
- Limited Risk - AI systems that involve some interaction with users, such as chatbots, online platforms or video games. These systems are subject to lighter transparency obligations, such as informing users that they are interacting with an AI system.
- Minimal Risk - AI systems that pose no or negligible risk, such as spam filters, smart appliances or AI used in entertainment. These systems will be largely exempt from any new obligations under the AI Act.
Who does the AI Act apply to?
The AI Act applies to both private and public sector entities, whether they are based inside or outside the EU. Definitions of the types of operators to which it applies are given in Chapter I and are set out below.
Provider - a natural or legal person that places AI systems on the market or puts them into service, or that places general-purpose AI models on the market, in the EU, irrespective of whether the provider is established or located within the EU or in a third country.
Importer - any natural or legal person located or established in the EU that places on the market an AI system bearing the name or trademark of a natural or legal person established outside the EU.
Distributor - any person in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market.
Deployer (User) - any natural or legal person that uses an AI system under its authority in a professional capacity, whether located inside or outside the EU, provided that the AI system's output is used in the EU.
Penalties for non-compliance
Penalties for non-compliance with the AI Act are significant and have the potential to severely impact a provider's or deployer's business. Fines may amount to the stated maximum figure or the stated percentage of total worldwide annual turnover for the preceding financial year, whichever is higher:
| Type of Breach | Maximum fine | % of total worldwide annual turnover (preceding financial year) |
| --- | --- | --- |
| Breach of the prohibition on unacceptable-risk AI | €35 million | 7% |
| Breach of other obligations under the AI Act, including those applying to high-risk AI systems and the transparency obligations | €15 million | 3% |
| Supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities (NCAs) | €7.5 million | 1% |
Upcoming key dates
2 February 2025: Unacceptable-risk systems will be prohibited
Organisations building AI systems or using AI as part of their EU products and services will need to show that their systems comply with Article 5 of the AI Act, i.e. that they are not engaging in AI practices categorised as "Unacceptable Risk".
2 August 2025: General-Purpose AI obligations will take effect
Providers of general-purpose AI models will need to ensure that their models meet the AI Act's requirements, including technical documentation, information for downstream providers, a policy to comply with EU copyright law and a summary of the content used for training.
2 August 2026: Remaining provisions come into force
This includes the rules for high-risk AI systems listed in Annex III of the AI Act, as well as the rules for AI systems in lower-risk categories, which will be subject to less stringent but still significant obligations. Companies operating in sectors that rely on AI technologies must align their practices with the requirements outlined in the AI Act by this time, ensuring compliance across AI systems in all risk categories.
For more information, please contact Damian Maloney, Franklin O'Sullivan or your usual contact in Beauchamps.