Update on EU AI Act - Upcoming Laws and Regulations
Release by the European Parliament
As part of its digital strategy, the EU aims to regulate artificial intelligence (AI) to create better conditions for the development and use of this innovative technology. AI can bring many benefits, such as improved healthcare, safer and cleaner transport, more efficient manufacturing, and cheaper and more sustainable energy supply.
In April 2021, the Commission proposed the first EU legal framework for AI. It recommends that AI systems, which can be used in many different applications, be analyzed and classified according to the risk they pose to users; the different risk levels entail correspondingly more or less regulation.
What the Parliament expects from AI legislation
The European Parliament primarily wants to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.
The Parliament also wants to establish a technology-neutral, uniform definition of AI that could be applied to future AI systems.
Artificial Intelligence Act: a risk-based approach
The new regulations establish obligations for providers and users that are based on the risk posed by the AI system. Although many AI systems pose minimal risk, they still need to be assessed.
Unacceptable risk
AI systems pose an unacceptable risk when they are considered a threat to people. These AI systems will be banned. They include:
Cognitive behavioral manipulation of individuals or specific vulnerable groups, for example, voice-activated toys that promote dangerous behavior in children;
Social scoring: classification of people based on behavior, socioeconomic status, and personal characteristics;
Biometric identification and categorization of natural persons;
Real-time remote biometric identification systems, such as facial recognition.
Some exceptions may be allowed for law enforcement purposes. Real-time remote biometric identification systems will be permissible in a limited number of serious cases. Systems for subsequent remote biometric identification, where identification occurs after a significant delay, may be allowed for the prosecution of serious crimes, and only with judicial approval.
High-risk AI systems
AI systems that pose a high risk to health and safety or to the fundamental rights of natural persons are considered high-risk and fall into two main categories.
AI systems that are used in products covered by EU product safety legislation, such as toys, aviation, vehicles, medical devices, and elevators.
AI systems that fall into specific areas and must be registered in an EU database:
Management and operation of critical infrastructure;
General and vocational education;
Employment, workforce management, and access to self-employment;
Access to and use of essential private and public services and benefits;
Law enforcement;
Management of migration, asylum, and border controls;
Assistance in interpreting and applying laws.
All high-risk AI systems will be assessed before being placed on the market and throughout their entire lifecycle. Citizens will have the right to file complaints about AI systems with the competent national authorities.
Transparency requirements
Generative foundation models such as ChatGPT are not classified as high-risk, but they must meet transparency requirements and comply with EU copyright law:
Disclosure that the content was generated by AI;
Design of the model to prevent it from generating illegal content;
Publication of summaries of copyrighted data used for training.
High-impact general-purpose AI models that could pose a systemic risk, such as the advanced AI model GPT-4, would need to undergo thorough evaluations, and any serious incidents would have to be reported to the Commission.
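The risk tiers described above can be sketched as a simple data structure. This is purely illustrative, not a legal taxonomy: the tier names and obligation lists paraphrase this article, and the `obligations_for` helper is a hypothetical function, not anything defined by the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers from the Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # assessed before market entry and over lifecycle
    LIMITED = "limited"            # transparency obligations (e.g. generative models)
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping of tiers to the obligations paraphrased from the article.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: [
        "conformity assessment before market entry",
        "lifecycle monitoring",
        "registration in an EU database (for listed areas)",
    ],
    RiskTier.LIMITED: [
        "disclose that content is AI-generated",
        "prevent generation of illegal content",
        "publish summaries of copyrighted training data",
    ],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```

A compliance team might use a structure like this only as a first-pass checklist; the actual classification of a given system requires legal analysis of the Act's annexes.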
Content created or altered by AI – images, audio, or video files (e.g., deepfakes) – must be clearly marked as AI-generated so users know when they encounter such content.
Promoting innovation
The Act aims to provide SMEs and startups with the opportunity to develop and train AI models before they are released to the general public.
For this reason, national authorities must provide companies with a testing environment that simulates real-world conditions.
Next steps
The agreed text will be formally adopted at one of the next plenary sessions of the Parliament. It will be fully applicable 24 months after coming into force, but some parts will apply sooner:
The ban on AI systems that pose unacceptable risks will apply six months after coming into force;
The codes of practice will apply nine months after coming into force;
Regulations for general-purpose AI systems that need to meet transparency requirements will apply twelve months after coming into force.
High-risk systems will have more time to meet the requirements; the obligations concerning them will apply 36 months after coming into force.
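The staggered deadlines above can be worked out mechanically from the entry-into-force date. The sketch below uses a placeholder date purely for illustration (the real date depends on publication in the EU's Official Journal), and `add_months` is a small helper defined here, not a standard-library function.

```python
from datetime import date

# Placeholder entry-into-force date, chosen only for illustration.
ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(start: date, months: int) -> date:
    """Return the date `months` calendar months after `start`
    (day-of-month clamped to 28 to avoid invalid dates like Feb 30)."""
    idx = start.month - 1 + months
    return date(start.year + idx // 12, idx % 12 + 1, min(start.day, 28))

# Application periods described in the article, in months after entry into force.
MILESTONES = {
    "ban on unacceptable-risk systems": 6,
    "codes of practice": 9,
    "general-purpose AI transparency rules": 12,
    "full applicability": 24,
    "high-risk obligations": 36,
}

timeline = {name: add_months(ENTRY_INTO_FORCE, m) for name, m in MILESTONES.items()}
for name, when in timeline.items():
    print(f"{when.isoformat()}  {name}")
```

Swapping in the actual entry-into-force date yields each obligation's concrete start date.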