The European Parliament has approved the world's first binding law on artificial intelligence (AI), the Artificial Intelligence Act. The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs on 13 March 2024. It constitutes a landmark legal framework that aims to protect fundamental rights and the rule of law, and to ensure that AI is ethical and trustworthy.
Risk-based approach
The regulation introduces a risk-based approach that classifies AI systems into four categories: prohibited, high-risk, limited-risk, and minimal-risk.
- Prohibited AI systems are those that violate fundamental rights or pose a clear threat to human rights, such as social scoring or mass surveillance.
- High-risk AI systems are those that have a significant impact on people's lives or rights, such as biometric identification, recruitment, education, health, justice, or law enforcement. These systems will be subject to strict requirements, such as human oversight, data quality, transparency, accuracy, security, and conformity assessment.
- Limited-risk AI systems are those that involve some interaction with users, such as chatbots, online platforms, or video games. These systems will have to inform users that they are interacting with an AI system, so that users can make an informed decision about whether to continue.
- Minimal-risk AI systems are those that pose no or negligible risk, such as spam filters, smart appliances, or entertainment. These systems will be largely exempt from the regulation but will still have to comply with existing laws and ethical principles.
Transparency
Notably, general-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional obligations, including performing model evaluations, assessing and mitigating those risks, and reporting serious incidents. In addition, artificial or manipulated image, audio or video content (deepfakes) must be clearly labelled as such under the Act.
Supervision
The regulation also establishes a governance structure at both EU and national level to ensure the effective implementation and enforcement of the rules. It creates a European Artificial Intelligence Board (EAIB), composed of representatives of the Commission and the national supervisory authorities (yet to be identified in Ireland), to facilitate cooperation, provide guidance and monitor the application of the regulation. The regulation also empowers the national supervisory authorities to monitor compliance, conduct investigations, impose corrective measures and sanctions, and cooperate with other authorities across the EU. It further provides for a system of administrative fines, which for the most serious infringements of the rules can reach up to 7% of the annual worldwide turnover of the provider or user of an AI system.
Next steps
The regulation is still subject to a final lawyer-linguist check and must be formally adopted by the Council of the EU before it is published in the Official Journal of the EU. It will enter into force (with direct effect) twenty days after its publication in the Official Journal and will be fully applicable 24 months after its entry into force, with the following exceptions:
- bans on prohibited practices will apply six months after entry into force;
- requirements for codes of practice will apply after nine months;
- general-purpose AI rules, including governance, will apply after 12 months; and
- obligations for high-risk systems will apply after 36 months.
The regulation has significant implications for the development, deployment, and use of AI systems in the EU and beyond, and will require businesses, public authorities, and civil society to be proactive in adapting to, and complying with, the new legal landscape.
This article was contributed by Franklin O'Sullivan.
For more information please contact Damian Maloney, Franklin O'Sullivan or your usual contact in Beauchamps.