AI in Check: What the New EU Artificial Intelligence Act Means for You
Authors: Andrew J. Zammit & Hayley Ann Pisani
Understanding the Prohibited AI Practices
Artificial intelligence (“AI”) is rapidly transforming our world, but with great power comes great responsibility. To ensure AI serves society ethically and safely, the European Union (“EU”) introduced the EU AI Act (“AI Act”). This law aims to set global standards for AI use, protecting individuals from harmful practices while promoting innovation. The AI Act officially came into force on the 1st of August 2024 and adopts a risk-based approach, meaning different obligations apply depending on the level of risk involved.
Not all AI applications are created equal, or even ethical. The AI Act sets a global benchmark by identifying and prohibiting specific AI practices that pose unacceptable risks to individuals and society, as set out in Article 5 of the Act. From manipulative techniques and social scoring to unauthorized biometric systems, these prohibitions aim to protect fundamental rights while fostering trust in AI technologies. They take effect on the 2nd of February 2025.
Whether you are a tech enthusiast, a business owner, or simply a citizen curious about the future, this Act will impact your life in meaningful ways. Here is everything you need to know about what is being prohibited and why it matters.
What is the EU’s Artificial Intelligence Act?
The AI Act is the world’s first comprehensive framework for regulating AI systems. It applies within the European Union (although its reach extends beyond the EU’s borders, capturing providers established elsewhere whose AI systems are placed on the EU market or whose output is used within it) and is designed to tackle the risks associated with AI while promoting its advantages.
The Act classifies AI systems into four risk categories:
Risk Hierarchy
- Unacceptable Risk: Practices that are outright banned.
- High Risk: AI systems allowed under strict conditions and oversight.
- Limited Risk: Systems, such as chatbots, subject to transparency obligations but fewer restrictions.
- Minimal Risk: AI systems, such as spam filters or AI-enabled video games, with no specific requirements.
This tiered approach ensures that more harmful AI practices face stricter rules, protecting people’s rights and safety.
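As a rough illustration of this tiered logic, here is a minimal sketch in Python (the names and example classifications are our own; the Act itself prescribes no such mapping, and actual classification turns on a system’s intended purpose and context of use):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (Article 5)"
    HIGH = "permitted under strict conditions and oversight"
    LIMITED = "permitted, subject to transparency obligations"
    MINIMAL = "permitted, with no specific requirements"

# Hypothetical triage examples for illustration only.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```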
What Practices are Banned?
As of the 2nd of February 2025, certain AI practices will no longer be permitted under EU law. Article 5 of the AI Act lists eight prohibited practices deemed to pose an unacceptable level of risk. These provisions are the first to become applicable, highlighting their importance. Under Article 112 of the AI Act, the European Commission will assess annually whether the list of prohibited AI practices needs to be amended, and may propose changes accordingly.
Every four years, the Commission will evaluate and report on the need for amendments, the effectiveness of the supervision system, and the performance of the AI Office. It will also assess the resources of national authorities, the state of penalties, the number of new businesses entering the market, and progress on energy-efficient AI models. In its evaluations, the Commission will consider the positions of various bodies and may propose amendments based on technological developments and the impact of AI systems on health, safety, and fundamental rights.
Here is a closer look at what is being prohibited:
Subliminal, manipulative, or deceptive techniques – Article 5(1)(a)
Subliminal techniques operate beyond an individual’s conscious awareness, with the objective or the effect of materially distorting behaviour in ways that cause, or are reasonably likely to cause, significant harm. AI systems designed to covertly manipulate decisions, particularly those targeting vulnerable individuals, will be prohibited. This includes applications that exploit psychological vulnerabilities or encourage harmful behaviours.
The objective is clear: AI should empower users, not manipulate them. The regulation emphasizes user safety by banning technologies that cross ethical boundaries, while recognizing that providers and deployers cannot be expected to account for circumstances beyond their control. This approach strikes a balance between accountability and practical limitations, ensuring AI systems operate responsibly and transparently.
Techniques exploiting the vulnerabilities of specific groups – Article 5(1)(b)
The Act prohibits AI systems from exploiting vulnerabilities stemming from age, disability, or a specific social or economic situation. This applies where the objective or effect is to materially distort a person’s behaviour in a manner reasonably likely to result in significant harm.
Importantly, legitimate uses, such as seeking help from a medical professional for mental health concerns, are not captured by this prohibition. However, harmful practices, like using AI to market predatory loans to low-income individuals or deceiving children into making unintended purchases, will be strictly forbidden.
The goal is to ensure AI technologies are developed and deployed ethically, prioritizing user protection, particularly for those in vulnerable positions.
Social scoring systems – Article 5(1)(c)
Inspired by dystopian depictions of society-wide surveillance, the EU AI Act explicitly bans social scoring systems. These systems assign individuals scores based on their behaviour, lifestyle, or personal characteristics, often determining access to services or opportunities. The regulation aims to prevent discrimination by prohibiting the evaluation or classification of individuals over time in a way that leads to detrimental, unjustified, or disproportionate treatment unrelated to the original context.
The dangers of social scoring have already been demonstrated. In the Netherlands, for instance, a risk-scoring algorithm used by the tax authorities to flag suspected benefits fraud disproportionately targeted minority and dual-nationality families, contributing to the childcare benefits scandal. While such systems are often associated with governments, they are not limited to public entities and could also be deployed by private actors.
Under the Act, any use of social scoring systems will not only be prohibited but may also result in significant fines. This prohibition underscores the EU’s commitment to safeguarding individual rights and ensuring fairness in AI deployment.
Predicting criminality based on profiling – Article 5(1)(d)
The EU AI Act also prohibits AI systems that assess or predict an individual’s risk of committing a criminal offence based solely on profiling or on an assessment of personality traits and characteristics. The prohibition does not extend to systems that support a human assessment of a person’s involvement in criminal activity, where that assessment is already based on objective and verifiable facts directly linked to the criminal conduct.
This restriction is designed to prevent individuals from being unfairly targeted or discriminated against due to biased or flawed algorithms. It ensures that AI does not perpetuate assumptions or inaccuracies when making risk predictions. Importantly, there must be a concrete and well-founded suspicion that a crime has been committed for such assessments to occur.
By imposing these limits, the Act reinforces the importance of protecting individuals’ rights and ensuring AI systems are deployed ethically and transparently in sensitive contexts.
What exactly is “Biometric Data”?
Before delving into the remaining prohibited practices, it is essential to define biometric data. Biometric data refers to personal data obtained through specific technical processing of an individual’s physical, physiological, or behavioural characteristics that enable their unique identification. Examples include fingerprints, facial recognition, iris scans, voice patterns, and behavioural traits such as gait or keystroke dynamics.
Under the General Data Protection Regulation (“GDPR”), biometric data is classified as a special category of personal data due to its sensitive nature. This classification reflects the significant privacy risks associated with its misuse, as biometric data can uniquely identify individuals.
The EU AI Act aligns with this understanding. Article 3(34) defines biometric data as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data (fingerprint data).” Recital 14 further clarifies that biometric data can be used for the authentication, identification, categorisation, or even the recognition of emotions of natural persons.
Facial Recognition Databases – Article 5(1)(e)
The EU AI Act prohibits the creation or expansion of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. This restriction was significantly influenced by the case of Clearview AI, a company fined over €30 million by the Dutch data protection authority for illegally building a database containing billions of facial images sourced from social media and other online platforms.
Clearview AI scraped these images without individuals’ knowledge or consent, violating their privacy and the principles of lawful biometric data processing. The company then used this database to offer facial recognition services to intelligence agencies and law enforcement, enabling them to identify individuals in photographs.
By banning such practices, the EU AI Act aims to prevent the unauthorized and unethical use of facial recognition technology, safeguarding individuals’ privacy and ensuring biometric data is handled responsibly.
Emotion Recognition in Sensitive Areas such as workplaces and educational institutions – Article 5(1)(f)
The EU AI Act prohibits AI systems from inferring emotions in workplaces and educational institutions, except for medical or safety purposes. Emotion recognition systems, as defined under the Act, analyse biometric data to identify or infer emotions or intentions, such as happiness, sadness, anger, or surprise.
Recital 18 distinguishes between emotions and physical states—like fatigue or pain—which are not covered by this prohibition. For instance, systems detecting driver fatigue to prevent accidents are permissible. Additionally, merely detecting obvious expressions, gestures, or voice characteristics (e.g., a smile, raised voice, or hand movement) does not constitute emotion recognition unless used to infer deeper emotional states.
The restriction stems from concerns about the unreliability of such systems, as they often fail to capture the complexity of human emotions accurately. This measure aims to protect individuals from invasive and potentially flawed technologies, ensuring fairness and privacy in sensitive environments like workplaces and schools.
Biometric Categorisation – Article 5(1)(g)
The EU AI Act prohibits biometric categorisation systems that deduce or infer sensitive attributes such as race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation. These systems assign individuals to specific categories based on their biometric data, raising significant privacy and ethical concerns.
While biometric categorisation can include attributes like age, hair colour, or tattoos, systems that target sensitive categories are strictly banned unless their use is ancillary to another commercial service and strictly necessary for objective technical reasons. For example, filters on online marketplaces that allow users to preview products or tools on social media platforms that modify facial features for entertainment purposes may qualify as ancillary features if they are inseparable from the primary service.
Real-time biometric identification in public spaces – Article 5(1)(h)
The EU AI Act imposes strict limitations on the use of real-time remote biometric identification systems—technologies that capture, compare, and identify biometric data without significant delay. According to Article 3(42), these systems operate instantaneously or with minor delays, often enabling the simultaneous identification of multiple individuals without their active involvement, as noted in Recital 17.
Facial recognition and biometric tracking in public spaces present significant privacy challenges. To address these concerns, the Act prohibits the use of such systems in publicly accessible spaces for law enforcement purposes, except in narrowly defined circumstances such as:
- Targeted searches for specific victims (e.g., missing persons);
- Prevention of a specific, substantial, and imminent threat to life or physical safety, or of a foreseeable terrorist attack;
- Identifying suspects of serious crimes.
These exceptions are subject to strict safeguards and require appropriate authorization to ensure accountability and prevent misuse.
The regulation seeks to balance public security and individual privacy. This is particularly relevant in cities like London, one of the most surveilled cities in the world, where the pervasive use of facial recognition technologies continues to spark debate about the trade-offs between safety and civil liberties.
Enforcement and fines
Under Article 99(3) of the AI Act, noncompliance with the prohibition of the AI practices outlined above may result in administrative fines of up to €35 million or, if the offender is an undertaking, up to 7% of its total worldwide annual revenue for the preceding financial year, whichever amount is higher.
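To make the “whichever is higher” mechanics concrete, the following sketch in Python (the function name is our own) computes the fine ceiling from an undertaking’s worldwide annual turnover:

```python
def article_99_3_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for engaging in a prohibited
    AI practice: EUR 35 million or 7% of total worldwide annual turnover
    for the preceding financial year, whichever is higher."""
    FIXED_CAP_EUR = 35_000_000   # EUR 35 million floor
    TURNOVER_RATE = 0.07         # 7% of worldwide annual turnover
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

# An undertaking with EUR 1 billion in turnover faces a cap of EUR 70 million,
# since 7% of turnover exceeds the EUR 35 million floor; a firm with
# EUR 100 million in turnover faces the EUR 35 million fixed cap instead.
print(article_99_3_fine_cap(1_000_000_000))  # 70000000.0
print(article_99_3_fine_cap(100_000_000))    # 35000000.0
```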
National market surveillance authorities will be tasked with ensuring compliance with the AI Act’s provisions on prohibited AI practices. These authorities will be required to report annually to the European Commission on any instances of noncompliance and the measures taken to address them.
The Global Impact of the EU AI Act
The EU AI Act is poised to set a global benchmark for regulating artificial intelligence, with other regions, including the United States and Asia, already considering similar frameworks. In contrast, the UK has not introduced a comprehensive AI regulatory framework and does not currently plan to do so. Instead, it advocates for a context-sensitive, balanced approach, relying on existing sector-specific laws to guide AI development.

Meanwhile, Canada is progressing with the Artificial Intelligence and Data Act (“AIDA”), which aims to protect Canadians from high-risk AI applications while promoting responsible AI practices that align with global standards. AIDA places emphasis on safety, human rights, and preventing reckless AI use. India, on the other hand, lacks specific legislation for AI governance, though the forthcoming Digital India Act is expected to focus on regulating high-risk AI applications.

As governments and businesses worldwide adapt to these evolving regulations, the EU AI Act highlights Europe’s leadership in promoting ethical AI governance on the global stage.
Why It Matters
AI has incredible potential to improve our lives, but it also comes with risks. The EU AI Act strikes a balance between fostering innovation and protecting people’s rights. By outlawing the most harmful practices, it ensures AI serves as a tool for progress rather than a threat to our freedoms. As the 2nd of February 2025 application date approaches, the world will be watching Europe set a new standard for responsible AI use.