The AI Act is finally here!

Written by Aušra Mažutavičienė on December 15, 2023

EU officials made history last week by enduring almost 36 hours of grueling debate to finally settle on a first-of-its-kind, comprehensive AI safety and transparency framework – the AI Act.

Let's dive in.

What is the AI Act?

The AI Act is the new legal framework that sets out crucial requirements for developers, deployers (such as OpenAI) and users of AI systems.

The AI Act takes a “risk-based approach” to products or services that use AI and focuses on regulating uses of AI rather than the technology. The riskier an AI application is, the stiffer the rules.

The new legislation prohibits AI systems that pose an "unacceptable risk" from being deployed in the EU, and in other cases imposes tiered obligations on AI systems categorised as "high risk" or "limited risk".

AI developers and deployers will also need to comply with EU copyright law and summarize the content they used for training.

When will the AI Act come into effect?

The final text of the legislation has yet to be published.

The AI Act won’t take effect until two years after final approval from European lawmakers, expected in early 2024. So it is likely that the AI Act will come into effect in 2026.

In the interim, the EU will launch an "AI Pact" urging companies to begin voluntarily following the rules. But there are no penalties if they don't.

Who will the AI Act apply to?

The AI Act will apply to providers, deployers and users of in-scope AI systems used in the EU, irrespective of where they’re established. So providers and deployers of AI systems in third countries, e.g. the US, will have to comply with the AI Act if the output of the system is used in the EU.

What are the requirements?

The requirements of the AI Act differ depending on the risk level posed by the AI system.

For example, AI systems presenting a limited risk will be subject to lighter-touch transparency obligations, such as informing users that the content they are engaging with is AI-generated.

High-risk AI systems will be subject to tougher requirements and obligations, such as the need to carry out a mandatory fundamental rights impact assessment. People will have a right to receive explanations about decisions based on the use of high-risk AI systems that affect their rights.  

AI systems presenting unacceptable risk will be banned.

Examples include:

  • Limited risk: chatbots or deepfakes;
  • High risk: AI used in sensitive systems, e.g. welfare, employment, education; and
  • Unacceptable risk: social scoring based on social behaviour or personal characteristics, emotion recognition in the workplace and biometric categorisation to infer sensitive data, such as sexual orientation.

What are the fines?

Violations of the AI Act could draw fines of up to 35 million euros ($38 million) or 7% of a company’s global annual revenue, whichever is higher:

  • For infringing the rules on prohibited practices, companies may be subject to fines of up to EUR 35 million or 7% of their global annual turnover.
  • For infringement of the general obligations set out by the AI Act, the fines may be up to EUR 15 million or 3%.
  • If companies supply incorrect information, fines may be up to EUR 7.5 million or 1.5%.

In addition, the political agreement envisages more proportionate caps on administrative fines for SMEs and start-ups.
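
To put these caps in perspective, here is a minimal Python sketch of how each ceiling combines a fixed amount with a share of turnover. The function and tier names are illustrative, and the "whichever is higher" rule reflects the announced political agreement rather than the final legal text:

```python
def fine_cap_eur(annual_turnover_eur: float, tier: str) -> float:
    """Illustrative only: upper bound on an AI Act fine for a given tier.

    Each cap is a fixed amount or a share of global annual turnover,
    whichever is higher.
    """
    # (fixed cap in EUR, share of global annual turnover)
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),    # EUR 35m or 7%
        "general_obligations": (15_000_000, 0.03),     # EUR 15m or 3%
        "incorrect_information": (7_500_000, 0.015),   # EUR 7.5m or 1.5%
    }
    fixed_cap, turnover_share = tiers[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)


# A company with EUR 1 billion in global annual turnover infringing the
# rules on prohibited practices: max(35m, 7% of 1bn) = EUR 70 million.
print(fine_cap_eur(1_000_000_000, "prohibited_practices"))  # 70000000.0
```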

Why does it matter?

It’s the first comprehensive AI legislation. The law could therefore set the global regulatory standard, much as the EU’s privacy rules (the GDPR) did.

It’s difficult to predict what the AI landscape will look like in 2026. Things are moving fast.