Technology, Media & Telecommunications (TMT)
AI Act… Only One Step Away
Author: Jackie Mallia
On the 21st May 2024, the Council gave the green light to the AI Act (‘the Act’), which harmonises the rules on artificial intelligence (‘AI’) across the European Union and could set a standard for the regulation of AI across the world. The Council’s approval is one of the final steps for the Act to come into effect – the Act will enter into force twenty days after its publication in the EU’s Official Journal, which is likely to occur in June 2024.
As with other legislation in the field of innovative technology, the Act aims to encourage investment and innovation in artificial intelligence in Europe, whilst protecting the fundamental rights of EU citizens. It classifies AI systems into four risk categories, namely unacceptable, high, limited, and minimal risk, and imposes varying degrees of regulation on each category.
The majority of obligations are imposed on providers of high-risk AI systems (those placing on the market or putting into service high-risk AI systems in the EU, even if they are based in a third country, and even where it is merely the AI system’s output which is used in the EU). Obligations are also imposed on deployers, namely those who use an AI system in a professional capacity.
Unacceptable Risk
The use of AI systems which fall within the ‘unacceptable’ tier is completely prohibited within the EU. This tier refers to AI application types that are incompatible with EU values and fundamental rights and includes the use of AI systems in the following manner:
- Utilizing subliminal, manipulative, or deceptive methods to alter behavior and hinder informed decision-making, leading to significant harm;
- Exploiting vulnerabilities associated with age, disability, or socio-economic status to manipulate behavior, resulting in considerable harm;
- Implementing biometric categorization systems to infer sensitive attributes (such as race, political views, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except for labeling or filtering legally obtained biometric datasets or when law enforcement categorizes biometric data;
- Employing social scoring to evaluate or classify individuals or groups based on social behavior or personal traits, resulting in negative or unfavorable treatment of those individuals;
- Predicting the likelihood of an individual committing criminal offenses based solely on profiling or personality traits, unless used to support human assessments grounded in objective, verifiable facts directly related to criminal activity;
- Creating facial recognition databases by indiscriminately scraping facial images from the internet or CCTV footage;
- Inferring emotions in workplaces or educational institutions, except for medical or safety purposes;
- Conducting ‘real-time’ remote biometric identification (RBI) in publicly accessible areas for law enforcement, except in cases of:
  - Searching for missing persons, abduction victims, or individuals who have been trafficked or sexually exploited;
  - Preventing an imminent and substantial threat to life, or a foreseeable terrorist attack;
  - Identifying suspects in serious crimes (such as murder, rape, armed robbery, narcotics and illegal weapons trafficking, organized crime, and environmental crime).
High Risk
AI applications that are classified as ‘high risk’ are those which could adversely impact the health and safety of individuals, their fundamental rights, or the environment. Such AI systems are highly regulated under the AI Act and must adhere to specific requirements prior to being marketed and used in the EU.
AI systems which are related to the safety components of already regulated products, and which are already subject to third-party conformity assessments, such as AI applications integrated into medical devices or vehicles, fall into this category and are captured by reference to the Union harmonisation legislation listed in Annex I of the AI Act.
Other stand-alone AI systems classified as high-risk are outlined in Annex III of the AI Act and include those used in the following areas:
- biometric and biometrics-based systems, e.g. biometric identification of individuals,
- management and operation of critical infrastructure, e.g. energy supply,
- education and vocational training, e.g. student assessments in educational institutions,
- employment and workforce management, e.g. performance evaluation,
- access to essential private and public services and benefits, e.g. credit scoring and emergency services dispatch,
- law enforcement, e.g. evaluating the reliability of evidence,
- migration, asylum, and border control management, e.g. assessing a person’s security risk or examining applications for asylum, visas, or residence permits,
- administration of justice and democratic processes, e.g. use in political campaigns.
Providers of high-risk AI systems are required to:
- Establish a risk management system throughout the system’s lifecycle;
- Ensure that the datasets used are sufficiently representative and, to the extent possible, free of errors;
- Provide technical documentation to demonstrate compliance;
- Ensure that the AI system automatically records events (logs) which may be useful to identify risks and modifications throughout the system’s lifecycle;
- Provide instructions for use to downstream deployers;
- Design their high-risk AI system to allow deployers to implement human oversight;
- Ensure that the high-risk AI system has appropriate levels of accuracy, robustness, and cybersecurity;
- Establish a quality management system to ensure compliance.
Limited Risk and Minimal Risk
Limited risk AI systems are generally far less regulated and are normally only subject to transparency obligations, meaning that end-users must be made aware that they are interacting with AI (for example, chatbots and deepfakes). Minimal risk AI systems, on the other hand, remain unregulated under the Act.
General Purpose AI (‘GPAI’)
The AI Act also regulates GPAIs, which are AI systems that have a broad range of potential applications, both intended and unintended by their developers. These systems can be used for various tasks across different fields, often without needing significant modifications. All GPAI model providers must draw up technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. If a GPAI model presents a systemic risk, its provider must also carry out evaluations and testing, report serious incidents, and ensure adequate cybersecurity protections.
Penalties
Penalties for non-compliance with the AI Act can be hefty and may be imposed on providers, deployers, importers, distributors, and notified bodies. The highest fines relate to violations involving prohibited systems and can amount to €35,000,000 or 7% of global annual turnover for the previous financial year, whichever is greater. Failure to abide by other obligations of the AI Act can attract fines of up to €15,000,000 or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher. At the lowest end of the spectrum, fines of up to €7,500,000 or 1% of global annual turnover for the previous financial year, whichever is greater, apply to the provision of incorrect, incomplete, or misleading information.
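As a purely illustrative sketch of how the ‘whichever is greater’ ceilings operate, the short snippet below compares each fixed amount with the corresponding percentage of a hypothetical global annual turnover of €1 billion (an assumed figure used only for this example); for larger undertakings, the turnover-based percentage will typically be the binding cap.

```python
# Illustrative sketch only: applies the "whichever is greater" rule from the
# AI Act's penalty tiers to a hypothetical undertaking. Figures are examples,
# not legal advice.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the higher of the fixed cap and the percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

turnover = 1_000_000_000  # assumed global annual turnover of EUR 1 billion

print(fine_ceiling(turnover, 35_000_000, 0.07))  # prohibited practices: EUR 70,000,000 (7% exceeds EUR 35m)
print(fine_ceiling(turnover, 15_000_000, 0.03))  # other obligations: EUR 30,000,000 (3% exceeds EUR 15m)
print(fine_ceiling(turnover, 7_500_000, 0.01))   # misleading information: EUR 10,000,000 (1% exceeds EUR 7.5m)
```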
Timelines
Following publication of the AI Act in the Official Journal, the Act will apply in accordance with the following timelines:
- prohibited AI systems – 6 months
- GPAI – 12 months
- high-risk AI systems under Annex III – 24 months
- high-risk AI systems under Annex I (regulated products) – 36 months
Should you have any queries about how the AI Act will affect your AI system, or your obligations in relation to the use of AI systems, please do not hesitate to contact us at tmt@gvzh.mt.