June 15, 2023
Artificial Intelligence (AI) has been a significant driver of technological innovation for several years now. Its applications range from autonomous vehicles and predictive analytics to personalized marketing and advanced healthcare diagnostics. However, with the rapid advancement and widespread use of AI, there has been a growing need for a comprehensive legal framework to govern its use and address the ethical, legal, and societal implications.
In April 2021, the European Commission took a bold step in this direction by proposing the first-ever legal framework on AI, known as the Artificial Intelligence Act (AI Act). The draft was published with the aim of ensuring that AI systems are used in a way that respects European values and regulations, and it has been discussed and amended since then.
Today, we find ourselves at a pivotal moment in the journey of the AI Act. The European Parliament has just adopted its position on the legislation, marking a key step towards setting unprecedented restrictions on how companies use artificial intelligence. This move solidifies Europe’s position as a de facto global tech regulator.
The AI Act is now closer than ever to becoming law, and as we approach the final stages of its adoption, it is crucial for businesses and individuals involved in the development, deployment, and use of AI systems to understand the implications of this ground-breaking legislation.
In this three-part series, we will delve into the details of the AI Act, starting with a general overview of its purpose, its general implications, prohibited AI practices, and the concept of high-risk AI systems. This first part offers a brief yet comprehensive overview of the AI Act and serves as a basis for the following parts, which will discuss the specific steps required to comply with its provisions.
Purpose of the AI Act
The AI Act is a response to the need for a legal framework that can keep up with the rapid advancements in AI technology. Among other things, it aims to ensure the provision of clear and adequate information to users and the implementation of appropriate human oversight measures.
General Implications of the AI Act
The AI Act has far-reaching implications for all stakeholders involved in the AI lifecycle, from developers and deployers to users and affected parties. Here are some of the key implications:
Scope and Applicability of the AI Act
The AI Act applies to a broad range of entities and activities and, similarly to the GDPR, has extraterritorial scope. That means the AI Act applies to AI systems placed on the EU market, put into service, or used in the EU, regardless of whether the provider is established in the EU. This wide scope of applicability reflects the EU’s intention to ensure that all AI systems affecting EU citizens and businesses comply with the AI Act.
In terms of material scope, the AI Act defines an ‘AI system’ as software that is developed with one or more of a defined set of AI techniques and approaches and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
The AI Act adopts a risk-based approach in two key respects. First, it uses this approach to determine which AI systems fall under its scope, with stricter requirements applying to high-risk AI systems. In addition, certain uses of AI are prohibited outright due to their potentially “unacceptable” risk to EU residents’ freedoms and rights. While this approach aims to avoid overregulation and allow for innovation in low-risk AI systems, it means that AI systems operating in significant fields such as health, banking, insurance, governmental activities, and public safety are fully exposed to the implications of the regulation and must comprehensively adapt their conduct to its requirements.
Second, within the category of high-risk AI systems, the Act requires the provider to conduct a risk analysis to identify the inherent risks of the specific AI system, in order to determine the concrete measures that need to be implemented. The higher the potential risk to safety and fundamental rights, the stricter the requirements for the AI system.
Prohibited AI Practices
The AI Act prohibits certain AI practices that are considered to create an unacceptable risk. These include AI systems that deploy subliminal techniques beyond a person’s consciousness to materially distort their behaviour in a manner that could cause physical or psychological harm. It also prohibits AI systems that exploit any vulnerabilities of a specific group of persons due to their age, physical or mental disability, to materially distort their behaviour in a manner that is likely to cause them physical or psychological harm. AI systems used for social scoring by public authorities are also prohibited.
High-Risk AI Systems
As mentioned above, the AI Act introduces restrictive rules for AI systems that are considered high-risk. These systems are subject to stringent obligations before they can be put on the market, including conformity assessments and certification procedures. High-risk AI systems include those used for critical infrastructures, educational and vocational training, employment, workers management and access to self-employment, essential private and public services, law enforcement, migration, asylum and border control management, and administration of justice and democratic processes.
For high-risk AI systems that are safety components of products or systems, or which are themselves products or systems, existing sector-specific legislation, such as the Medical Devices Regulation or the Machinery Directive (i.e., legislation requiring a CE mark), will continue to apply, but will be supplemented with additional AI-related obligations and restrictions.
The AI Act outlines several requirements that high-risk AI systems must meet before they can be put on the market. These requirements are designed to ensure that such systems are safe, reliable, and respect fundamental rights. They include, among other things, ensuring that AI systems do not lead to discriminatory outcomes and that they respect the principle of equal treatment.
In the next part of this series, we will dive deeper into the requirements for high-risk AI systems and focus especially on the conformity assessment procedures, and the obligations of different actors involved in the AI system lifecycle. Stay tuned for more insights on the EU’s forthcoming AI Act.
This document is intended to provide only a general background regarding this matter. This document should not be regarded as binding legal advice, but rather a practical overview based on our understanding. APM & Co. is not licensed to practice law outside of Israel.
APM Technology and Regulation Team.