The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favor, 46 against, and 49 abstentions.
It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while fostering innovation and establishing Europe as a leader in this field. The regulation establishes obligations for AI based on its potential risk and level of impact.
Banned applications
The new rules ban certain AI applications that threaten citizens’ rights, such as biometric categorization systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or surveillance camera footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities will also be prohibited.
Law enforcement exemptions
The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are met; for example, its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorization. Such uses may include, for example, a targeted search for a missing person or the prevention of a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorization linked to a criminal offense.
Obligations for high-risk systems
Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. health care, banking), certain systems in law enforcement, migration and border management, and justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain usage logs, be transparent and accurate, and ensure human oversight. Citizens will have the right to submit complaints about AI systems and to receive explanations about decisions based on high-risk AI systems that affect their rights.
Transparency requirements
General purpose AI (GPAI) systems and their underlying GPAI models must meet certain transparency requirements, including compliance with EU copyright law and publishing a detailed summary of the content used for training. More powerful GPAI models that have the potential to pose systemic risks have additional requirements, such as performing model assessments, assessing and mitigating systemic risks, and reporting incidents.
Additionally, artificial or manipulated image, audio, or video content (“deepfakes”) must be clearly labeled as such.
Measures to support innovation and SMEs
Regulatory sandboxes and real-world testing will have to be established at the national level and made accessible to small businesses and start-ups, so they can develop and train innovative AI before bringing it to market.
Quotes
In Tuesday’s plenary debate, Brando Benifei (S&D, Italy), co-rapporteur for the Internal Market Committee, said: “We finally have the world’s first binding law on artificial intelligence to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe. The AI Office will now be set up to help companies start complying with the rules before they enter into force. We have ensured that human beings and European values are at the heart of AI’s development.”
Dragos Tudorache (Renew, Romania), co-rapporteur for the Civil Liberties Committee, said: “We have linked the concept of artificial intelligence to the fundamental values that form the foundation of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, our labor markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice.”
Next steps
The regulation is still subject to a final check by lawyers and linguists and is expected to be finally adopted before the end of the legislative term (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.
The regulation will enter into force 20 days after its publication in the Official Journal and will be fully applicable 24 months after entry into force, with the following exceptions: bans on prohibited practices will apply six months after entry into force; codes of practice, nine months after; general-purpose AI rules, including governance, 12 months after; and obligations for high-risk systems, 36 months after.
Background
The Artificial Intelligence Act responds directly to citizens’ proposals from the Conference on the Future of Europe (COFE): most concretely, proposal 12(10) on strengthening the EU’s competitiveness in strategic sectors; proposal 33(5) on a safe and trustworthy society, including countering disinformation and ensuring humans are ultimately in control; proposal 35(3) on ensuring human oversight and (8) on trustworthy and responsible use of AI, setting safeguards and ensuring transparency; and proposal 37(3) on using AI and digital tools to improve citizens’ access to information, including for persons with disabilities.