Technology, Media & Telecommunications (TMT)
HAI-Risk, What Reward?
Authors: Andrew J. Zammit, Ann Bugeja & Nicholas Scerri
Artificial Intelligence’s development is shaping the present and future of the world, moulding industries and transforming lives. However, not all AI systems are created equal, and regulators must therefore distinguish between them, governing (and in some cases restricting) particular developments and innovations. Whilst the development of AI creates boundless opportunities, it also creates a unique set of challenges that, if handled inadequately, could have serious negative consequences, as we shall consider below.
The European Union has taken a lead in aiming to “tame the beast” that is AI through the introduction of the EU AI Act (“AI Act”), which entered into force on 1 August 2024 and whose provisions begin to apply on a rolling basis from 2 February 2025. Central to the Act is a risk hierarchy intended to govern AI systems differently depending on their nature and application. The AI Act differentiates between four categories of AI:
- Prohibited AI;
- High-Risk AI;
- Limited-Risk AI; and
- Minimal-Risk AI.
AI systems fall into these four categories depending on their potential impact on safety, fundamental rights and social well-being. As the title implies, this article focuses on High-Risk AI systems, exploring whether the rewards of such influential tools justify their perils and the challenges of regulating and deploying them.
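Purely by way of illustration (and not as a statement of the Act’s legal tests), the tiered structure can be thought of as a simple classification: a system’s intended use determines its category, and the category determines the obligations that attach. The Python sketch below is a hypothetical simplification; the category names mirror the Act, but the example use cases and the `classify_risk` helper are our own illustrative assumptions.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "Prohibited AI"   # e.g. social scoring (banned outright)
    HIGH = "High-Risk AI"          # e.g. Annex III use cases (strict obligations)
    LIMITED = "Limited-Risk AI"    # e.g. chatbots (transparency duties)
    MINIMAL = "Minimal-Risk AI"    # e.g. spam filters (no new obligations)

# Hypothetical mapping of intended purposes to tiers, loosely inspired by
# the Act's structure; a real assessment requires legal analysis, not a lookup.
ILLUSTRATIVE_USE_CASES = {
    "social_scoring": RiskTier.PROHIBITED,
    "border_control_screening": RiskTier.HIGH,
    "exam_grading": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def classify_risk(intended_purpose: str) -> RiskTier:
    """Illustrative helper: unknown purposes default to MINIMAL in this sketch."""
    return ILLUSTRATIVE_USE_CASES.get(intended_purpose, RiskTier.MINIMAL)

print(classify_risk("exam_grading"))  # RiskTier.HIGH
```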
What is High-Risk AI?
The AI Act distinguishes between two different forms of High-Risk AI systems. The first comprises systems designed to function as safety components within products or systems, or that are themselves standalone products or systems. These are governed by the European Union harmonisation legislation listed in Annex I of the AI Act, and include AI diagnostic tools used in healthcare that assist in critical medical decisions, as well as autonomous machinery such as self-operating robots or AI-driven vehicles, all of which carry significant safety implications. The AI Act requires that these High-Risk AI systems undergo third-party conformity assessments to ensure compliance with the Act’s strict requirements, such as risk management, data governance and transparency.
The second type of High-Risk AI system covers systems intended for the use cases listed in Annex III of the AI Act, such as critical infrastructure management, education, law enforcement, and migration and border control. Examples of this second type include AI systems used to monitor and control energy grids or water systems, tools assessing student performance in educational settings, predictive policing or facial recognition technologies in law enforcement, and AI systems assessing visa applications or monitoring border activities. These systems are categorised as high-risk because their failure or misuse could result in disproportionate harm to individuals or society, whether by compromising safety, infringing on fundamental rights or eroding public trust.
In a nutshell, a High-Risk AI system is one whose failure or misuse could result in significant harm to individuals or society. Deployers of high-risk AI systems will have to establish additional safeguards to ensure that the AI operates in its intended manner, without producing unintended harmful effects. They must also establish human oversight by trained personnel, able to monitor the system and intervene when necessary. Additionally, the data they input into the AI system must be accurate, relevant and free from biases, to prevent unfair or harmful results. Deployers must keep system performance under observation, log activity, and report risks or incidents to providers and authorities. They must also inform affected workers about the use of AI, comply with public registration requirements, and carry out data protection impact assessments where required under the GDPR.
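The Act does not prescribe how these duties are to be engineered, but some of them (logging, monitoring, human intervention) translate naturally into technical controls. The sketch below is a hypothetical illustration of what an audit-logged, human-in-the-loop wrapper around a high-risk model might look like; the names (`HighRiskModelWrapper`, `confidence_threshold`) are our own assumptions, not terms drawn from the Act.

```python
import logging
from datetime import datetime, timezone

# Illustrative audit log; real deployments would use tamper-evident storage.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

class HighRiskModelWrapper:
    """Hypothetical wrapper: logs every prediction and escalates
    low-confidence decisions to a trained human reviewer."""

    def __init__(self, model, confidence_threshold: float = 0.85):
        self.model = model
        self.confidence_threshold = confidence_threshold

    def predict(self, case_id: str, features: dict):
        label, confidence = self.model(features)  # assumed (label, score) API
        logging.info(
            "%s case=%s label=%s confidence=%.3f",
            datetime.now(timezone.utc).isoformat(), case_id, label, confidence,
        )
        if confidence < self.confidence_threshold:
            # Human oversight: flag for review instead of acting automatically.
            logging.warning("case=%s escalated to human review", case_id)
            return {"decision": "PENDING_HUMAN_REVIEW", "model_label": label}
        return {"decision": label, "confidence": confidence}
```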
Potential with High-Risk AI
AI’s potential is vast, offering revolutionary solutions to some of the world’s most pressing problems. In healthcare, AI-driven diagnostic tools are being used to detect diseases earlier and with greater accuracy; in some instances, such tools have identified cancers months before conventional methods could. These improvements in diagnostic efficiency and accuracy significantly improve patient outcomes, as predictive algorithms embedded in imaging systems allow for an accelerated treatment timeline.
Furthermore, AI-powered tools have been used in law enforcement to enhance public safety. Law enforcement agencies sometimes deploy predictive policing tools, such as PredPol, to help anticipate and mitigate crime. Whilst heavy monitoring and regulation of such capable tools is essential to prevent misuse such as biased decision-making, the ability to keep citizens safer and to pre-empt crime is certainly worth leveraging: it has the potential to transform public safety and build safer communities by enabling proactive responses to criminal activity, provided fairness and accountability are upheld through rigorous oversight.
Challenges and Risks
Despite possessing such an abundance of potential, High-Risk AI systems pose challenges that should not be overlooked. Ethical concerns regarding profiling and data protection remain of paramount importance to legislators and the people their laws govern alike. Historically, AI systems used for recruitment have displayed biases depending on the resumes the system was trained with. For example, Amazon’s now-discontinued AI recruitment tool inadvertently penalised female applicants, reflecting the inherent risks of High-Risk AI systems. This occurred because the system was trained on historical hiring data that predominantly reflected male-dominated recruitment patterns, leading it to rank male candidates higher than equally qualified female applicants.
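Biases of this kind are often detectable with simple statistical checks before deployment. The sketch below is a hypothetical example of one such check, the “four-fifths” (disparate impact) rule, which is drawn from US employment-selection guidance rather than from the AI Act itself: it compares selection rates across groups on made-up numbers and flags large gaps.

```python
def disparate_impact_ratio(selected: dict, applicants: dict) -> float:
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 are conventionally treated as a red flag."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes from a recruitment model (illustrative only).
applicants = {"female": 400, "male": 600}
selected = {"female": 40, "male": 120}   # rates: 0.10 vs 0.20

ratio = disparate_impact_ratio(selected, applicants)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50, well below 0.8
if ratio < 0.8:
    print("Warning: selection rates differ markedly between groups.")
```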
Furthermore, real-time biometric identification systems have raised concerns relating to privacy and data protection. Legislators did not ignore these concerns: the AI Act prohibits such practices unless they serve a narrowly defined purpose deemed acceptable, such as preventing terrorist attacks or locating missing persons. Even so, misuse of predictive AI systems in areas like policing could lead to unjust profiling that disproportionately targets minority groups. Calls for strict limitations on such powerful AI tools are therefore justified, as inappropriate use could have drastically disproportionate negative effects.
The WAI Forward
To fully realise the transformative potential AI promises, it is imperative that there is robust legislation in place, a heavy task being undertaken by the European Union. The AI Act, the proposed AI Liability Directive, the Digital Services Act, the NIS 2 Directive and other regulatory frameworks represent a coordinated effort to establish a comprehensive legal landscape for artificial intelligence. The urgency of placing parameters around AI’s wide range of uses and consequences has produced something of a legislative scramble within the EU: codifying the limitations on AI, giving developers and deployers parameters within which they can operate, and reassuring people that their rights will be protected whilst they benefit from technological advancement. These instruments dovetail by targeting different, yet interconnected, aspects of AI governance. Together, they aim to strike a balance between enabling technological advancement and safeguarding fundamental rights.
For legislation to achieve its full intended effect, collaboration between public authorities and private entities is of paramount importance, as only a working symbiotic relationship between the two can ensure the proper use of High-Risk AI. Collaboration through the sharing of diverse datasets and regular audits could be a key practice in minimising misuse of tools with such vast potential, both good and bad. For example, AI applied in recruitment or law enforcement will have to be subject to very specific scrutiny to avoid discrimination and violations of fundamental rights. The AI Act prioritises fairness and accountability, aiming to guarantee that high-risk AI systems deliver societal benefits without compromising safety or equity, making it possible to balance innovation with responsibility.
Conclusion
AI undoubtedly holds immense potential to reshape industries, revolutionise the way we tackle societal challenges, and enhance the overall well-being of individuals worldwide. From medical diagnostics to public safety and infrastructure, these systems are tools too valuable to overlook. But their great strengths also demand great responsibility. Without strong regulation and oversight, the risks of misuse, bias and disproportionate harm could undermine the very progress they promise to deliver. The EU AI Act seeks to strike this important balance: high-risk AI systems can be leveraged for the greater good while their potential deleterious impacts are minimised through transparency, accountability and strict oversight.
Looking ahead, high-risk AI appears worth keeping and developing further. Paired with comprehensive regulatory frameworks, it could form the basis of a world that is safer, fairer and more innovative, one in which technology serves humanity responsibly.