In this article, which follows on from our earlier piece, The AI Act - an outline of the core provisions and key dates, we set out how the AI Act could apply to your business and the measures your business can take to ensure compliance. By taking proactive steps, businesses can stay ahead of the regulatory curve and avoid potential penalties. As the AI Act sets a global precedent for AI governance, its implications extend well beyond the European market, making compliance a critical priority for businesses worldwide.
Does it apply to my business?
The AI Act does not apply only to businesses driven by or focused on AI. Most businesses already fall within its scope at some level, or almost certainly will before long. The risk categorisations show that the AI Act is concerned not just with what an AI system was designed to do, but with how it is in fact used. This is because AI can have numerous applications and be trained on varying data sets, meaning the same AI product can produce different results and serve different uses for different businesses and organisations.
The AI Act applies to both private and public sector entities, whether they are based inside or outside the EU. Chapter I defines the types of operators to which it applies: providers, importers, distributors and deployers (users).
AI use examples and obligations under the AI Act
The table below sets out examples of AI used in businesses, the likely category each would fall into under the AI Act, and the resulting obligations from the perspective of a deployer/user.
| AI use case example | Likely category under the AI Act | Obligation pursuant to the AI Act |
| --- | --- | --- |
| AI systems that analyse facial data or categorise biometric data in the workplace | Unacceptable risk | Prohibited as of 2 February 2025 |
| Customer service chatbots | Limited risk | Must inform customers that they are interacting with a bot |
| ChatGPT or industry-specific generative AI assistants | Most likely limited/minimal risk; may require an impact assessment | No obligation unless the AI is used to generate or manipulate text published for the purpose of informing the public; this does not apply where the AI-generated content undergoes human review |
| Generative AI assistants (e.g. chatbots) | Limited risk | Subject to transparency obligations |
| AI systems that evaluate the credit score or creditworthiness of natural persons | High risk | Requires human oversight and quality management systems, among further transparency requirements |
| Fraud-detection AI in financial services | Limited risk (listed as an exception to the Annex III high-risk categorisation) | Not subject to high-risk obligations; transparency obligations and voluntary codes of conduct may apply |
| Pricing and risk assessment for health insurance | High risk | Requires human oversight and quality management systems, among further transparency requirements |
| Recruitment, including placing targeted job advertisements, screening or filtering applications and evaluating candidates | High risk | Requires human oversight and quality management systems, among further transparency requirements |
| AI systems used to evaluate the eligibility of individuals for public housing or other forms of public assistance | High risk | In addition to human oversight, transparency and quality management requirements, deployers must carry out an assessment of the impact on fundamental rights that use of the system may produce (Article 27) |
| Spam filters for email and communications | Minimal risk | No obligations; businesses can voluntarily adopt codes of conduct |
What can my business do?
It is important for businesses to begin understanding what types of AI they currently deploy. It would be worthwhile to consider the following measures:
- Risk Assessment - Identify the AI systems within your organisation and their risk class under the AI Act, particularly high-risk AI systems, and ensure they meet the necessary compliance obligations.
- Documentation & Transparency - Prepare detailed reports, update current systems and document how they meet safety, transparency and ethical requirements. Customer interfaces may need to be updated to ensure compliance with transparency and accountability requirements.
- Internal Compliance Structures - Establish dedicated compliance teams or work with external legal advisors to create and carry out monitoring and reporting procedures.
- Continuous Education - Stay informed about changes to the AI regulatory landscape and the evolving enforcement and advisory role of the AI Office and NCAs, particularly the Irish DPC.
For more information, please contact Damian Maloney, Franklin O'Sullivan or your usual contact in Beauchamps.