Navigating AI Accountability: The EU’s AI Liability Directive and Its Implications
Author: Nicholas Scerri
As artificial intelligence (‘AI’) rapidly becomes a more fundamental part of daily life, it is ever more important that the legal boundaries of this technology be clearly defined and understood by all. The European Union (‘EU’) is stepping up to that challenge as it seeks to address the complex landscape that AI presents. The AI Liability Directive, proposed by the European Commission in September 2022, is set to pioneer how the consequences of AI malfunctions and mishaps are handled. The Directive introduces clear liability rules, making it easier for victims to seek compensation and for businesses to understand their responsibilities. By setting out these rules across all Member States, the EU aims to foster innovation in the AI field, restore consumer trust in AI, and create a symbiotic relationship between technological advancement and accountability.
Background
The AI Liability Directive arises from the urgent need to address the growing prevalence of AI technologies across various sectors and the uncertain chains of liability that arise as a result.
A notable example that illustrates the difficulty is the use of AI in self-driving cars, which has sparked significant debate and scrutiny and raised the question of who is liable when an AI system malfunctions. Despite promises of increased safety and efficiency, self-driving cars have encountered challenges, with accidents attributable to a variety of possible persons (the hardware manufacturer, the software developer, or potentially the user). In a widely publicized incident, an autonomous Uber vehicle failed to detect a pedestrian crossing the road, resulting in a fatal collision. The back-up driver of the car was charged with negligent homicide as a result, as she had been streaming video on her phone at the time rather than paying attention to the road. Incidents of this nature raise profound questions about accountability and the allocation of responsibility where AI systems are involved in accidents. Further questions have been asked about an AI’s ability to make moral decisions, such as choosing between crashing into a young man or an elderly woman. Who decides the moral standard, and how does the AI actualise it?
Similarly, AI-driven diagnostic tools used in healthcare have garnered attention for their potential to revolutionize medical practice. These tools analyze vast amounts of patient data to diagnose conditions and recommend treatment plans. However, instances of misdiagnosis or faulty recommendations have surfaced, raising concerns about patient safety and the reliability of AI systems in critical healthcare settings. For example, there have been cases where AI algorithms incorrectly identified benign tumors as malignant, leading to unnecessary medical procedures and patient distress.
These real-life examples highlight the complexities and risks associated with AI technology. While AI has the potential to enhance safety, efficiency, and accuracy in various domains, its deployment also presents unique challenges in terms of liability and accountability. The current EU liability framework, anchored by the Product Liability Directive (‘PLD’) of 1985 and traditional civil law provisions regulating tort, has struggled to adapt to these challenges, resulting in legal uncertainty and diminished consumer trust in AI. Landmark initiatives such as the proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (the ‘AI Liability Directive’), published by the European Commission in September 2022, are critical for the establishment of clear rules and mechanisms to address these challenges and to promote responsible AI development and deployment.
What is New in the AI Liability Directive
The AI Liability Directive represents a significant development in the European Union’s approach to regulating civil liability where AI systems are involved. By setting consistent standards for non-contractual civil liability for damage caused by AI systems, the Directive aims to encourage the development and use of reliable AI throughout the EU.
One of the most significant changes introduced by the Directive is the presumption of causality for AI systems under Article 4. This provision lightens the burden of proof on claimants by creating a rebuttable presumption that non-compliance with specific legal obligations (such as those imposed by the AI Act) caused the harm suffered by the claimant, unless the defendant can prove otherwise. The presumption applies if three conditions are met: (i) the claimant shows that the defendant failed to comply with a duty of care intended to protect against the harm suffered; (ii) it is reasonably likely that this failure influenced the output produced by the AI system; and (iii) it is reasonably likely that the output (or the failure to produce an output) caused the damage the claimant has suffered.
The Directive also grants national courts the power to order the disclosure of evidence relating to high-risk AI systems (‘HRAIS’). This helps address the difficulties claimants face in identifying liable parties and proving their claims in the complex AI landscape. Article 3 of the AI Liability Directive allows courts to order providers or users to disclose relevant evidence about a HRAIS suspected of having caused damage, limited to what is necessary and proportionate and subject to protections for confidential information. If the defendant fails to comply, it may be presumed (unless proven otherwise) that the defendant did not meet its duty of care. However, before a court may order the disclosure of evidence relating to a HRAIS, claimants must meet the following conditions: (i) they present sufficient facts and evidence to support the plausibility of their claim for damages; and (ii) they show that they have taken all reasonable steps to obtain the evidence from the defendant before the order is sought.
Implications for Stakeholders
This legislation will have far-reaching effects, influencing different parties in different ways. Among those most heavily impacted by the AI Liability Directive are AI manufacturers and developers. They will face increased compliance obligations and will need to invest significant resources in testing, certification, and documentation to ensure their AI products and services meet stricter safety standards. The Directive also increases the likelihood of liability for damages, potentially leading to more compensation awards and reduced funds for reinvestment, which could stifle innovation. On a positive note, manufacturers and developers might build greater trust with consumers through increased transparency about their AI systems, such as disclosure of algorithms and performance metrics, which could also yield insights for improvement.
Consumers and end-users will benefit from enhanced protection under the AI Liability Directive. They will enjoy stronger legal protections in the event of harm caused by AI-powered products or services, with a clearer pathway for bringing claims for AI-related harm, including non-physical harm such as economic loss. The presumption of causation for HRAIS lowers the burden of proof on consumers, improving their prospects of success as the burden shifts to the defendant. The Directive also promotes greater transparency and aims to boost innovation, giving consumers access to a wider range of better products and enabling more informed decisions.
Member State governments and legislators will be responsible for transposing and implementing the Directive into national law. This involves enacting new legislation or amending existing laws to align with the Directive and ensuring compliance. Government agencies will need to enforce compliance, which may include investigating incidents of harm caused by AI products and services and taking enforcement action if necessary. If suitable bodies do not exist to handle these tasks, new branches may need to be established. Member States must also coordinate to ensure the Directive’s minimum standards are consistently applied across the EU.
Challenges and Criticisms
One criticism of the AI Liability Directive is its minimum harmonization approach, which sets mandatory minimum standards of protection across the EU while allowing Member States to introduce stricter national laws. This approach raises concerns about legal inconsistency and confusion in liability standards, potentially impeding the free movement of AI products and services and creating legal uncertainty for businesses and consumers. Critics argue that a more harmonized approach would better achieve the Directive’s goals of fostering AI innovation and increasing legal certainty.
The Directive’s provision allowing courts to order the disclosure of evidence related to HRAIS has received mixed reactions. While intended to help claimants, this provision could make legal proceedings costly and time-consuming. The process of gathering and producing required evidence may be difficult and expensive, especially for smaller AI developers or those operating in fields where confidentiality is crucial. Additionally, defining “relevant evidence” could lead to disputes and delays in resolving liability claims.
Conclusion
The AI Liability Directive represents a significant step forward in addressing the legal complexities that come hand in hand with regulating frontier technologies such as AI. By establishing clear liability rules and mechanisms for evidence disclosure, the Directive aims to enhance consumer trust and provide a more robust framework for seeking compensation for harm caused by AI systems. However, the criticism of the Directive’s minimum harmonization approach, along with the potential burdens of evidence disclosure, highlights the ongoing challenges legislators face. As the European Union continues to refine its regulatory landscape, it will be crucial to balance the promotion of innovation with the protection of fundamental rights and interests. Ultimately, the AI Liability Directive sets the stage for a more accountable and transparent AI ecosystem, paving the way for responsible technological advancement and greater legal certainty across the EU.