The New EU Artificial Intelligence Act – The 10 key things you need to know now

1. A global leader for AI regulation

On 8 December 2023, the European institutions reached provisional political agreement on the world’s first comprehensive law on artificial intelligence: the European Union’s new AI Act.

With the political agreement, the EU moves toward becoming the first major world power to enact laws governing AI. Friday’s deal between EU countries and European Parliament members came after nearly 15 hours of negotiations that followed an almost 24-hour debate the previous day.

The two sides are set to hash out details in the coming days, which could change the shape of the final legislation.

“Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter. This is, yes, I believe, a historical day,” European Commissioner Thierry Breton told a press conference.

The accord requires foundation models, such as the model underpinning ChatGPT, and general-purpose AI systems (GPAI) to comply with transparency obligations before they are put on the market. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.

High-impact foundation models with systemic risk will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the European Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.

This follows a trilogue between the EU Commission, Council and Parliament.

While the legislation has been under negotiation in Europe for many years, late debates over the regulation of foundation models, and disagreement over whether the legislation would hamper innovation too much, pushed agreement to the last minute.

However, political agreement has been reached.

This means that the text of the law is not yet finalised. Work will now continue on cleaning up the drafting. The text will then be formally adopted by both Parliament and Council to become EU law. This is expected to happen in early 2024.

Being the first legislative proposal of its kind in the world, the AI Act is also expected to set certain standards for AI regulations in other jurisdictions.

Below, we summarise the main issues addressed during the marathon 37-hour negotiations to reach political agreement.

We also provide comments on what organisations should do to prepare.

2. What is covered: the “AI system”

After much debate, the definition of “AI system” follows the globally recognised standard developed by the OECD. This should support a global consensus around the types of systems that are intended to be regulated as “artificial intelligence”. Note that the input from which the outputs are generated may be provided by machines (e.g. autonomous vehicle sensors) or by humans (e.g. ChatGPT prompts). References to “content” as an output reflect the recent focus on bringing generative AI within the scope of the legislation.

3. The risk-based approach

The parties to the trilogue confirmed a risk-based approach: the higher the risk, the stricter the rules.

The AI Act establishes obligations for AI, based on its potential risks and level of impact on individuals and society as a whole. Accordingly, AI systems are divided into systems of limited risk and those posing high risk. In addition, certain AI systems are prohibited (see item 5 below).

AI systems of limited risk will be subject to transparency requirements. For example, users should be made aware that they are interacting with an AI system.

For AI systems classified as high risk (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law), strict obligations will apply. These include mandatory Fundamental Rights Impact Assessments, as well as (among others) Conformity Assessments, data governance requirements, registration in an EU database, risk management and quality management systems, transparency, human oversight, accuracy, robustness and cybersecurity. Examples of such systems include certain medical devices, recruitment, HR and worker management tools, and critical infrastructure management (e.g. water, gas and electricity).

High-risk AI systems will require extensive governance activities to ensure compliance.
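
For readers who think in code, the tiered structure can be sketched as a simple lookup. This is an illustrative simplification only: the tier names and obligation lists below paraphrase the categories described above and are not legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers under the AI Act (simplified)."""
    PROHIBITED = "prohibited"  # banned outright (see item 5)
    HIGH = "high"              # strict obligations apply
    LIMITED = "limited"        # transparency requirements only

# Simplified mapping of tiers to the headline obligations described above.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.PROHIBITED: ["may not be placed on the EU market"],
    RiskTier.HIGH: [
        "fundamental rights impact assessment",
        "conformity assessment",
        "data governance",
        "registration in an EU database",
        "risk and quality management systems",
        "transparency and human oversight",
        "accuracy, robustness and cybersecurity",
    ],
    RiskTier.LIMITED: ["transparency (e.g. disclose that users interact with AI)"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```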

4. GPAI systems and foundation models

This was a key area of the last-minute negotiations.

Dedicated rules for general-purpose AI systems (GPAIs) will ensure transparency along the value chain. These rules include drawing up technical documentation, complying with EU copyright law and providing detailed summaries of the content used for training.

For high-impact GPAI models which may create systemic risks, additional obligations will apply, such as model evaluations, systemic risk assessment and mitigation, adversarial testing, reporting to the Commission on serious incidents, cybersecurity and energy-efficiency reporting. (Until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the law.) These rules seek to address concerns about the societal risks posed by the speed at which these powerful tools are developing. However, debate looks likely to continue over whether the bar for “high impact” has been set too high.

5. Banned AI systems

In the end, it was agreed to ban certain AI systems considered a clear threat to the fundamental rights of people, such as:

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious and philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent their free will; and
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

Use of AI systems by law enforcement authorities for their institutional purposes will be subject to specific safeguards.

6. Promoting innovation

The AI Act promotes “regulatory sandboxes” and “real-world testing”, established by national authorities, to allow innovative AI to be developed and trained before being placed on the market. This was seen as a key “win” for the political groups seeking a pro-innovation, supportive regulatory framework within which AI can develop in the EU.

7. A new AI Regulator?

The EU institutions agreed on establishing new administrative infrastructures including:

  • An AI Office, which will sit within the Commission and will be tasked with overseeing the most advanced AI models, contributing to fostering new standards and testing practices, and enforcing the common rules in all EU member states. It seems likely this will become the equivalent of the AI Safety Institutes recently announced in the UK and the US;
  • A scientific panel of independent experts, which will advise the AI Office on GPAI models and the emergence of high-impact GPAI models, contribute to the development of methodologies for evaluating the capabilities of foundation models, and monitor possible material safety risks related to foundation models;
  • An AI Board, comprising EU member states’ representatives, which will remain a coordination platform and an advisory body to the Commission, contributing to the implementation of the AI Act (e.g. designing codes of practice); and
  • An advisory forum for stakeholders will be set up to provide technical expertise to the AI Board.

This use of independent experts and advisory forums may also set an example for AI governance models in the private sector, with active involvement of external stakeholders.

It will also be interesting to see what approach member states take to establishing their national AI authorities, i.e. whether they empower existing authorities (e.g. data protection authorities) or opt for other options (e.g. a new independent authority). The debate is still open.

8. What are the penalties?

Non-compliance with the rules will lead to fines set as a percentage of global annual turnover or a fixed amount, whichever is higher: from €7.5 million or 1.5% of global turnover up to €35 million or 7% of global turnover, depending on the infringement and the size of the company.
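
By way of illustration only, the sketch below shows how a “whichever is higher” cap works in practice. The figures are the top-tier numbers quoted above; the function name and the example turnover are our own assumptions, not taken from the Act.

```python
def max_fine_eur(fixed_cap_eur: float, turnover_pct: float,
                 global_annual_turnover_eur: float) -> float:
    """Maximum fine: the fixed cap or the turnover percentage, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * global_annual_turnover_eur)

# Top tier: EUR 35m or 7% of global annual turnover.
# Hypothetical example: a company with EUR 2bn global annual turnover.
exposure = max_fine_eur(35_000_000, 0.07, 2_000_000_000)
print(f"Maximum exposure: EUR {exposure:,.0f}")  # EUR 140,000,000
```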

9. When will it come into force?

The final text of the AI Act will likely be published in the Official Journal of the European Union at the beginning of 2024.

The AI Act will then become applicable two years after its entry into force. Some specific provisions will apply within six months, while the rules on GPAIs will apply within 12 months.
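
To make the staggered timeline concrete, here is a minimal sketch that derives indicative application dates from an entry-into-force date. The entry-into-force date used below is a placeholder assumption only; no date has been fixed.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (clamped to the 1st for simplicity)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

entry_into_force = date(2024, 6, 1)  # placeholder assumption only

milestones = {
    "some specific provisions": add_months(entry_into_force, 6),
    "GPAI rules": add_months(entry_into_force, 12),
    "general applicability": add_months(entry_into_force, 24),
}
for rule, when in milestones.items():
    print(f"{rule}: {when:%B %Y}")
```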

10. How businesses can prepare now for the entry into force of the AI Act

While waiting for the AI Act to be formally adopted (and to become fully applicable), organisations using or planning to use AI systems should start addressing its impact by mapping their processes and assessing how far their AI systems already comply with the new rules. The AI Act is the first formal legislation to begin turning the ethical and regulatory principles to which organisations must adhere when deploying AI into binding obligations.
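
One practical starting point for the mapping exercise described above is a simple inventory of AI systems and their outstanding compliance actions. The sketch below shows one possible shape for such a record; all field names and the example entry are illustrative assumptions, not terms drawn from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative fields only)."""
    name: str
    business_purpose: str
    risk_tier: str                    # e.g. "high", "limited" (see item 3)
    vendor: str | None = None
    gaps: list[str] = field(default_factory=list)  # open compliance actions

    def is_ready(self) -> bool:
        """True once no open compliance actions remain."""
        return not self.gaps

inventory = [
    AISystemRecord(
        name="CV screening tool",
        business_purpose="recruitment shortlisting",
        risk_tier="high",
        vendor="ExampleVendor Ltd",  # hypothetical vendor
        gaps=["fundamental rights impact assessment", "EU database registration"],
    ),
]
for record in inventory:
    status = "ready" if record.is_ready() else f"open actions: {', '.join(record.gaps)}"
    print(f"{record.name} ({record.risk_tier} risk): {status}")
```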

Implementing an AI governance strategy should be the starting point. A robust strategy must be aligned with business objectives and identify the areas of the business where AI will most benefit the organisation’s strategic goals. It will also require full alignment with initiatives for managing personal and non-personal data assets in compliance with existing legislation.

Beyond that, organisations should consider implementing a framework of policies and processes to ensure that only compliant developers are onboarded and only compliant models are developed or deployed. Risks should be properly identified and mitigated, with adequate monitoring and supervision throughout the AI system lifecycle. Measures ranging from internal training to market surveillance will be key. These can likely be developed from existing risk management processes – in particular, data protection risk assessments, vendor due diligence and audits.

Look carefully at what internal and external resources are required to support your governance activities. This milestone is likely to be a “starting gun” in the race for AI governance talent.

Finally, consider the approach globally. The EU AI Act leads the way, but it will not be the only law developed to address AI risks and promote trust. Any truly global strategy will need to accommodate the key requirements and principles of the EU AI Act while remaining forward-looking about how other regulatory requirements may develop.
