On March 13, 2024, the EU passed the much-anticipated AI Act, the most comprehensive attempt to regulate AI globally. Tech experts have called it “bittersweet”: a framework designed to allow imaginative, flexible regulation of AI.
It follows a risk-based approach: some AI use cases are prohibited outright, high-risk systems face a complex compliance mechanism, and general-purpose AI models are subject to specific provisions. The AI Act has a broad scope and may affect businesses worldwide, in virtually every sector.
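The tiered structure described above can be summarized with a minimal, purely hypothetical sketch. The tier names reflect the four risk levels commonly used to describe the Act; the lookup table and example use cases are illustrative inventions, not legal guidance:

```python
from enum import Enum

class RiskTier(Enum):
    """Four risk tiers commonly used to summarize the AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative examples only; real classification requires legal analysis.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case]
```

The point of the sketch is simply that obligations scale with risk: the higher the tier, the heavier the compliance burden.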
Banning Applications to Protect People’s Rights
The Act prohibits specific applications of AI that pose a risk to individual rights. It bans biometric categorization systems that rely on sensitive attributes, as well as the untargeted scraping of facial images from sources such as the internet and CCTV footage to build facial recognition databases.
Moreover, the rules outlaw the use of AI for emotion recognition in the workplace and educational settings, social scoring systems, predictive policing solely based on profiling or assessing individual characteristics, and AI designed to manipulate human behaviour or exploit vulnerabilities.
Law Enforcement Exemptions For Biometric Identifications
In principle, the use of biometric identification systems by law enforcement is strictly prohibited except in specific, narrowly defined circumstances. Real-time remote biometric identification (RBI) may be deployed only with stringent safeguards in place, such as temporal and geographical limitations and prior judicial authorization.
Permissible scenarios include targeted searches for missing persons or the prevention of terrorist activities. Using these systems after the fact (“post-remote” biometric identification) is classified as high-risk.
Obligations For High-Risk Systems
Clear obligations are outlined for other high-risk AI systems, which pose significant potential harm to health, safety, fundamental rights, and the rule of law. Examples of high-risk AI applications include those used in critical infrastructure, education and vocational training, employment, essential private and public services, certain law enforcement systems, migration and border management, and justice and democratic processes.
These systems must undergo risk assessments, mitigate risks, maintain usage logs, ensure transparency and accuracy, and incorporate human oversight. Citizens will have the right to lodge complaints regarding AI systems and receive explanations about decisions made based on high-risk AI systems that impact their rights.
What Are Transparency Requirements In The EU Under This Act?
General-purpose AI (GPAI) systems, along with their underlying models, are mandated to adhere to specific transparency standards. These standards encompass compliance with EU copyright regulations and the publication of comprehensive summaries detailing the training data used.
More powerful GPAI models, which have the potential to present systemic risks, will be subject to further obligations. These include conducting model evaluations, identifying and mitigating systemic risks, and reporting any incidents that occur.
Measures To Support Innovations and SMEs
Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.
Unacceptable Risk
AI systems posing an “unacceptable” risk are prohibited under the AI Act. If currently in operation, these systems must be deactivated within six months of the AI Act’s effective date. AI systems are classified as posing an “unacceptable” risk when they:
- Employ manipulative or deceptive techniques that distort behavior and hinder informed decision-making.
- Exploit vulnerabilities related to age, disability, or socio-economic circumstances.
- Utilize biometric categorisation systems to infer sensitive attributes such as race or political opinions.
- Employ social scoring mechanisms.
- Assess an individual’s risk of committing a crime solely based on profiling or personality traits, except when supplemented by human assessments based on specific criteria.
- Compile facial recognition databases by indiscriminately collecting facial images from the internet or CCTV.
- Infer emotions in workplace or educational settings, except for medical or safety purposes.
- Implement “real-time” remote biometric identification in public places for law enforcement purposes, subject to narrow exceptions.
Effective Implementation of This Act
Although the AI Act has been finalized, the focus now shifts towards its effective implementation.
“Now the emphasis is on ensuring its effective implementation and enforcement. This also entails renewed attention towards complementary legislation,” said Risto Uuk, EU research lead at the nonprofit Future of Life Institute, in an interview with Euronews Next.
This additional legislation encompasses the AI Liability Directive, which aids in addressing liability claims arising from damage caused by AI-enabled products and services.
Additionally, the establishment of the EU AI Office is intended to streamline rule enforcement.
“The key factors to ensure the effectiveness of the law are ensuring that the AI Office is adequately resourced to fulfill its designated tasks and that the codes of practice for general-purpose AI are well-drafted, with input from civil society,” Uuk added.
The successful execution of the EU’s AI Act relies on a multifaceted strategy. Firstly, clear and consistent directives from the newly established AI Office are paramount to ensure universal comprehension and adherence to the regulations.
Secondly, promoting cooperation among member states is imperative for the consistent enforcement of the law throughout the EU.
Finally, maintaining an ongoing dialogue with developers and researchers will be critical to balancing responsible AI advancement with nurturing innovation within the new legal framework.