The EU AI Act: How to Adapt Quickly and Safely for Profit

By Maryrose Lyons | Ai Institute Founder, August 21, 2024

In an era where artificial intelligence (AI) is rapidly transforming the business landscape, accountants find themselves at the intersection of financial expertise and technological innovation.

The European Union's Artificial Intelligence Act (the EU AI Act) represents a landmark shift in AI regulation, with significant implications for accountants and business owners.

To help guide your first steps in the new environment, this article looks at the key aspects of the EU AI Act that are relevant to accountants. It also provides guidance on how to integrate AI within your practice. Quickly. Safely. And with profit in mind.

The Five Basic Facts of the Act

As with any new law, the devil is in the details. But this is enough to get you started:

  1. The EU AI Act is designed to create a comprehensive legal framework for the development, deployment and use of AI systems.
  2. Its full text was published in July 2024 and entered into force on 1 August 2024.
  3. Rules on prohibited AI systems and AI literacy will come into force in February 2025 - giving you (just) enough time to adjust and adapt.
  4. Implementation will be overseen by a European Artificial Intelligence Board, with national supervisory authorities in each member state. 
  5. Non-compliance penalties for the most serious breaches can reach up to €35 million or 7% of global annual turnover, whichever is higher (a quick worked example follows this list). 
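To make the "whichever is higher" rule concrete, here is a minimal sketch using entirely hypothetical turnover figures. The function name and the example amounts are assumptions for illustration only; this is not legal advice.

```python
# Hypothetical illustration of the "whichever is higher" penalty cap for the
# most serious breaches (prohibited AI practices). Lower tiers apply to other
# obligations.
FIXED_CAP_EUR = 35_000_000   # fixed ceiling in euros
TURNOVER_RATE = 0.07         # 7% of global annual turnover

def max_penalty(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for the most serious breaches."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# A firm with €2 billion in global turnover: 7% = €140m, which exceeds €35m.
print(f"€{max_penalty(2_000_000_000):,.0f}")   # €140,000,000
# A firm with €100 million in turnover: 7% = €7m, so the €35m ceiling applies.
print(f"€{max_penalty(100_000_000):,.0f}")     # €35,000,000
```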

The Act’s Scope & Purpose

Much has already been made of one of the Act’s primary objectives - protecting democracy and the other fundamental rights of EU citizens. It outright bans applications which are likely to undermine these important freedoms. 

However, legislators are also very clear that they do not want to leave EU nations at a competitive disadvantage, which is why they have attempted to write the Act so that it still fosters innovation and investment while enhancing governance and legal certainty around the technology.

To achieve this balance, the Act adopts a risk-based approach, placing AI use into four risk categories: (i) Unacceptable; (ii) High; (iii) Limited; and (iv) Minimal. 

Each category requires a different approach from owners and users of the systems, ranging from an outright ban on some activities to no new obligations at all for others. This new complexity makes it important for everyone, not least accountants, to understand which activities are categorised where - and the broad implications of each.

i. Unacceptable risks

These are prohibited under the Act and, it is hoped, the bans will help reassure us that our legal rights remain intact. The following activities will be illegal from February 2025:

  • Using AI for subliminal, manipulative or deceptive techniques to distort human behaviour in ways that cause, or are likely to cause, significant harm;
  • Exploiting vulnerabilities of specific groups - such as children, disabled people or those in economic distress - to negatively distort their behaviour with AI;
  • ‘Social scoring’ - evaluating or classifying individuals based on their social behaviour or personal characteristics in ways that lead to detrimental or unfair treatment;
  • Identifying individuals in ‘real time’ using remote biometric systems, such as facial recognition, in publicly accessible spaces for law enforcement purposes - although narrow exceptions do apply. 

ii. High risks

Despite being deemed high-risk, these activities will continue to be permitted. However, they will be subject to strict legal obligations. Systems which fall into this area include those used for:

  • Recruitment, promotion, task allocation and other worker management tasks;
  • Safety components of products covered by existing legislation - such as medical devices, toys and machinery;
  • Credit scoring and determining access to essential services such as healthcare and emergency services; and
  • Permitted forms of biometric identification and categorisation. 

If you - or your clients - use AI which falls into the high-risk category, the duties include:

  • The ability to demonstrate a high level of robustness, accuracy and security;
  • Application of appropriate human oversight to ensure the system is operating as intended; 
  • Clear and adequate information for the system user, such that they know how to operate it successfully;
  • Risk assessment and mitigation systems, which must be implemented ahead of time;
  • High-quality datasets to train the AI;
  • The logging of all activity, to ensure the traceability of each output (a simplified logging sketch follows this list); and
  • Creation and maintenance of detailed documentation, ahead of any request from the authorities.
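As an illustration of the logging duty above, here is a minimal sketch of how a practice might record each AI-assisted output so it can be traced later. The field names, file format and function are assumptions for illustration; the Act does not prescribe a specific schema.

```python
# Hypothetical sketch: append-only traceability log for AI-assisted outputs.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_activity_log.jsonl")  # one JSON record per line

def log_ai_output(system_name: str, input_summary: str, output_summary: str,
                  reviewed_by: str) -> None:
    """Append one traceability record for an AI-assisted decision or output."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "input_summary": input_summary,
        "output_summary": output_summary,
        "human_reviewer": reviewed_by,  # supports the human-oversight duty
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a credit-assessment suggestion reviewed by a named staff member.
log_ai_output("credit-scoring-assistant", "Client 1042 loan application",
              "Recommended approval, score 0.82", "j.smith")
```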

iii. Limited risks

In order to avoid unnecessary burdens and allow for competition, activities thought by legislators to be of ‘limited’ risk will continue unabated - with one caveat: users must be made aware they are interacting with AI. 

ChatGPT, Copilot and other LLMs and generative AI tools fall into this category, as do emotion recognition systems. Content creators will have to disclose when text, audio, images or videos have been generated or manipulated by AI - particularly when the content relates to matters of public interest.

iv. Minimal risks

Some tools, by contrast, will have no need to declare their use of AI and so - in practical terms - are unaffected by the Act. They tend to be narrower-purpose applications which, for instance, are embedded into video games, spam filters, writing apps to check spelling or websites for shopping recommendations.

Five Steps for Certified Accountants to Stay on the Right Side of the Act

So, given all that, what are the implications for you? And what do you need to do now?

1. Risk Assessment and Compliance: 

As a certified accountant and business owner, you'll need to assess whether any AI systems you use, or plan to implement, fall under the high-risk category. This is particularly relevant if you're using AI for:

  • Credit scoring or loan approval processes;
  • HR management and recruitment;
  • Fraud detection and prevention; and
  • Automated financial reporting and analysis.

If your AI systems are classified as high-risk, you'll need to ensure compliance with the stringent requirements outlined in the Act. 
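As a starting point for that assessment, here is a minimal sketch of an AI-use inventory checked against the areas listed above. The tool names and categories are hypothetical examples only; a real classification should be confirmed against the Act itself and professional advice.

```python
# Hypothetical sketch: flag an inventory of AI uses against the areas this
# article highlights as most likely to need a high-risk assessment.
AREAS_TO_REVIEW = {"credit scoring", "recruitment",
                   "fraud detection", "automated financial reporting"}

ai_inventory = [
    {"tool": "LoanScorer Pro (hypothetical)", "use": "credit scoring"},
    {"tool": "CV screening add-on (hypothetical)", "use": "recruitment"},
    {"tool": "Spellchecker", "use": "writing assistance"},
]

for system in ai_inventory:
    if system["use"] in AREAS_TO_REVIEW:
        status = "REVIEW: may fall under the high-risk category"
    else:
        status = "likely limited or minimal risk"
    print(f'{system["tool"]:<40} {status}')
```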

2. Data Quality and Management: 

The new law places significant emphasis on the quality of data used to train AI systems. As certified accountants often deal with sensitive financial data, you'll need to:

  • Implement robust data collection and preprocessing methods;
  • Ensure all data is accurate, complete and representative;
  • Regularly audit and update your datasets (a basic audit sketch follows this list); and
  • Implement strong data protection and privacy measures in line with GDPR requirements.
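As one possible starting point for the audit step above, here is a minimal sketch of a periodic dataset check using pandas. The file name, column name and checks are assumptions for illustration; they are not a complete data-quality programme.

```python
# Hypothetical sketch: basic data-quality indicators for a training dataset.
import pandas as pd

def audit_dataset(path: str, date_column: str = "transaction_date") -> None:
    """Print simple completeness, duplication and freshness checks."""
    df = pd.read_csv(path, parse_dates=[date_column])

    missing = df.isna().mean().sort_values(ascending=False)  # share missing per column
    duplicates = df.duplicated().sum()                       # exact duplicate rows
    newest = df[date_column].max()                           # how current the data is

    print("Missing values (share per column):")
    print(missing[missing > 0])
    print(f"Duplicate rows: {duplicates}")
    print(f"Most recent record: {newest:%Y-%m-%d}")

# Example call (hypothetical file name):
# audit_dataset("client_transactions.csv")
```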

3. Transparency and Client Communication: 

When using AI systems in your practice, particularly those that interact directly with clients, you'll need to:

  • Clearly disclose the use of AI to your clients;
  • Explain how AI-driven decisions are made, especially in high-stakes situations like credit assessments; and
  • Provide options for human intervention when requested.

The Ai Institute has created an AI Policy template which covers all of the areas that need to be addressed in order to stay on-side with the law. To skip ahead of the crowd, you can download it here.

4. Professional Development: 

To stay compliant with the Act, all businesses should invest in AI literacy training for themselves and their staff - something the Ai Institute can help with. And because this is a fast-paced environment which is continually evolving, everyone must stay up to date on AI regulations and best practices. Get in touch with us to chat in more detail.

5. Ethical Considerations: 

The EU AI Act emphasises the importance of ethical AI deployment. As trusted financial advisors, certified accountants have a responsibility to ensure that AI systems are used ethically in their practice. In practical terms, this means developing an AI Policy that aligns with your professional values and the requirements of the Act. The policy should address:

  • Transparency and explainability
  • Fairness and non-discrimination
  • Privacy and data protection
  • Accountability and liability
  • Human oversight and intervention

Here Lies Opportunity

The EU AI Act is not a trivial thing. For the first time, it provides the tramlines for AI use and development in the EU. The implications for all of us - business owners and certified accountants - are hugely significant. The Act does present challenges. But it also offers opportunity. 

By proactively addressing the regulations and the wider ethical considerations of AI, you can safely leverage the technology to enhance your practice, all the while maintaining the trust and confidence of your team and clients. This matters because the competitive benefits for early adopters are likely to be significant. 

But early adoption will not be enough. The AI landscape will continue to evolve. Rapidly. Staying informed, remaining flexible and staying committed to best practice will be essential. Your practice’s methods are likely to iterate faster tomorrow than they do today - and far faster than they did yesterday - and that rate of change will only accelerate. This is what makes your commitment to the principles of ethical AI, as well as the letter of the EU AI Act, vital to future profitability. You will have to adapt not only quickly but also safely.

If certified accountants can collectively position themselves as leaders in the responsible use of AI, they may well end up setting the standard for ethical innovation across the entire financial sector.

Start your Ai journey today

Have a question?

Ask away! One of our team will get to you shortly.

Join our Ai Community

Sign up for our newsletter and receive a free Ai policy starter kit directly to your inbox.
