The EU AI Act
It took 36 hours of negotiations, but the general form of the AI Act is now clear.
Artificial Intelligence (AI) systems deployed in the European Union (EU) are now subject to quality standards defined by the EU, with requirements that apply to actors along the AI value chain – in particular, AI system providers and deployers.
As experts and advocates of trustworthy artificial intelligence systems, leiwand.ai welcomes the AI Act as the right step to safeguard our health, safety, and fundamental rights.
Yet, in its current form, the Act leaves many questions unanswered.
In this blog article, we will tell you what the EU AI Act is all about, share our thoughts on its current state, and outline the questions we believe still need to be answered.
What is the EU AI Act and why do we need it?
Artificial intelligence has become part of our everyday existence and is having a profound effect on the way we live and work.
It has the potential to dramatically improve our lives in areas like medicine, where it can help diagnose deadly diseases or support the search for new pharmaceuticals. Many hope that it can also help with other challenges, such as combating climate change.
However, AI systems also carry considerable potential for harm – privacy breaches, deepfakes, biased decision-making and high energy consumption, just to name a few. AI thereby poses a challenge for the public, governments, and businesses alike.
The European Commission presented its proposal for the regulation in April 2021, and the rules were further refined in trilogue discussions – interinstitutional negotiations between representatives of the European Parliament, the Council of the European Union and the European Commission.
Two years later, the European Parliament and the Council of the European Union have reached a consensus on a regulation aimed at guaranteeing the safety and trustworthiness of AI in Europe, upholding fundamental rights and democratic values, and simultaneously fostering the growth and expansion of businesses.
The AI Act will work with a definition of AI systems that is closely aligned with the OECD's understanding.
What falls under the AI Act – and what doesn’t
All AI systems need to adhere to the Act
The European Union Artificial Intelligence Act covers all AI systems deployed in the EU and imposes requirements on actors along the AI value chain.
Not all AI systems are created equal
While artificial intelligence systems in general fall under the AI Act, there are some that are explicitly excluded.
AI systems that fall under the purview of national security are exempt from the Act.
Yes, that means military and defense applications.
Also, AI systems used purely for Research and Development are out of scope.
Banned Artificial Intelligence Systems
AI systems that pose an unacceptable risk to humans will be banned:
biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
emotion recognition in the workplace and educational institutions;
social scoring based on social behaviour or personal characteristics;
some cases of predictive policing systems for individuals (e.g. systems that predict the likelihood or nature of criminal behaviour of natural persons);
AI systems that manipulate human behaviour to circumvent people's free will;
AI systems used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
High-risk AI Systems
Beyond the AI systems that are prohibited, there are systems considered to pose a significant risk to human health, safety and fundamental rights. These AI systems will have to satisfy strict requirements:
risk-mitigation systems,
high quality of data sets,
logging of activity,
detailed documentation,
clear user information,
human oversight,
a high level of robustness, accuracy and cybersecurity
and a fundamental rights impact assessment, to be carried out by deployers before the system is put into use (notably, this includes banking and insurance applications).
The high-risk category has been extended to include emotion recognition systems and systems used to influence voter behaviour (the original EC proposal already included biometric identification systems, systems used in critical infrastructure, education, human resources, law enforcement, immigration control and the administration of justice, as well as systems used to determine access to essential public and private services; see the EC proposal, Annexes II and III, for details).
Intermediate risk
Some systems are considered to pose an intermediate level of risk and will have to satisfy certain transparency requirements: notably, informing users when they are interacting with an AI system such as a chatbot, or when they are dealing with AI-generated content.
It is important to clarify that, while the other risk categories are mutually exclusive (a prohibited system cannot also be an intermediate-risk system, and a high-risk system cannot be a low-risk system), this does not hold for high risk and intermediate risk: an AI system can in fact be both high-risk and subject to the transparency requirements.
Low-risk AI
Low-risk AI systems, like spam filters or inventory-management systems, face no requirements, but are encouraged to adhere to voluntary Codes of Conduct.
Generative AI systems have also not been forgotten.
New text addresses General Purpose AI (GPAI) systems and places transparency requirements on them.
A two-tiered approach has been chosen here: lower-tier GPAI systems have to satisfy certain transparency obligations, including technical documentation, compliance with EU copyright law and detailed summaries of the training data.
Higher-tier GPAI systems (those posing "systemic risk") must comply with more stringent requirements, such as model evaluations, risk assessment and mitigation, cybersecurity measures and reporting on energy efficiency.
Whether a GPAI system falls into the lower or the higher tier is currently determined by compute (the number of FLOPs used in training, with the threshold currently set at 10^25 or higher), a criterion that captures just two of the current state-of-the-art systems (GPT-4 and Gemini).
The Act also builds in some flexibility in determining this threshold: an AI Office will be charged with this task, taking into account not just compute but also the number of business users and the number of model parameters, as well as other criteria that may be suggested by a scientific advisory board.
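To make the tiering rule concrete, here is a minimal sketch – purely illustrative and not part of the Act's text – of how the compute criterion alone would sort a model into one of the two tiers. The threshold constant, function name and return strings are our own simplification; as noted above, the AI Office may weigh further criteria over time.

```python
# Illustrative sketch only: classifies a GPAI model by the training-compute
# threshold described above. Names and strings are ours, not the Act's.

SYSTEMIC_RISK_FLOPS_THRESHOLD = 1e25  # training compute threshold (FLOPs)

def gpai_tier(training_flops: float) -> str:
    """Return the GPAI tier implied by training compute alone."""
    if training_flops >= SYSTEMIC_RISK_FLOPS_THRESHOLD:
        return "higher tier (systemic risk): model evaluations, risk mitigation, cybersecurity, energy reporting"
    return "lower tier: technical documentation, copyright compliance, training-data summary"

# Example: a model trained with 2 * 10^25 FLOPs would land in the higher tier.
print(gpai_tier(2e25))
```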
The rights of AI subjects
The AI Act now also includes a provision allowing natural or legal persons to file a complaint with the relevant market surveillance authority concerning non-compliance with the AI Act, as well as the right to request an explanation of decisions based on an AI system's output.
And what about the SMEs?
The AI Act also provides for measures in support of innovation:
AI regulatory sandboxes,
testing of AI systems in real-world settings, under specific conditions and safeguards, and
actions to support SMEs, along with some limited and clearly specified derogations.
The costs of non-compliance
Fines for non-compliance are set either as a fixed amount or as a percentage of global annual turnover, whichever is higher (a short worked example follows below):
€35 million or 7% for violations involving the banned AI applications,
€15 million or 3% for violations of the AI Act's obligations, and
€7.5 million or 1.5% for the supply of incorrect information.
The Act also foresees more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of its provisions.
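Here is that short worked example of the "whichever is higher" rule. It is purely illustrative: the function name and the company's turnover are hypothetical, and the figures are simply those quoted in the list above.

```python
# Illustrative sketch only: which of the two caps applies in practice.
# Function name and the example turnover are hypothetical.

def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Return the higher of the fixed cap and the share of global annual turnover."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

# A company with €2 billion global annual turnover violating a prohibition:
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # roughly €140 million, so the 7% cap applies
```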
Timeline for the implementation of the AI Act:
6 months after the AI Act enters into force, the prohibitions apply.
12 months after the AI Act enters into force, the GPAI rules apply, with Codes of Practice bridging the period until harmonised standards are in place.
24 months after the AI Act enters into force, the full law applies.
The EU AI Act represents a major milestone for the world of artificial intelligence. However, there are still details to be hammered out.
Here are some of our questions, which we hope will be answered in the coming weeks:
What is the exact definition of "AI system" that will be used?
The EP and EC press releases refer only to GPAI models, whereas the Council text also refers to foundation models. How are GPAI models defined? Does the Act cover foundation models as well? If yes, how are they defined, and how do they differ from GPAI models?
When discussing exceptions to the transparency requirements for GPAI, what is meant by an "open-source model"? Does this mean releasing just the model weights? Or does it also include the source code for the algorithm used to train the model, as well as the training data itself?
What happens to AI systems that are in place before the AI Act comes into force? Will, for instance, applications that are prohibited have to be removed? And what about legacy systems that fall into the high-risk category – will they be exempt from the high-risk requirements? If yes, for how long?
How will the AI Act be enforced, and who exactly is responsible? What is the effect of a complaint of non-compliance brought to the market surveillance authority by a natural person?
Do the Fundamental Rights Impact Assessments apply to all high-risk AI systems, or only to some?
Regarding the use of AI systems in law enforcement – what is the exact extent of the prohibitions on predictive policing and biometric categorisation, and what are the precise exemptions? What safeguards are put in place to prevent fundamental rights abuses when exemptions are granted?
What are the specific actions planned to be taken in support of SMEs? Are there any plans for facilitating compliance, by providing, for example, expertise, research, or infrastructure?