1.4. AI Safety and Liability Initiatives

AI, in many of its aspects, comes with both promises and risks, and the same holds for safety and liability. Policy initiatives date back to 2015, when the European Parliament (EP) set up a working group with the primary aim of drawing up “European” civil law rules on emerging technologies and robotics. This working group delivered a draft report setting out a series of recommendations on civil law rules on robotics. In 2016, the EP continued its initiatives by commissioning a study on European Civil Law Rules for Robotics, which assessed the main challenges that emerging technologies raise for the civil law landscape. Following this analysis, the EP adopted in 2017 a resolution with recommendations to the Commission on Civil Law Rules on Robotics. A year later, the European Parliamentary Research Service (EPRS) delivered a study on ‘a common EU approach to liability rules and insurance for connected and autonomous vehicles – European added value assessment’. In the aftermath, the European Commission (EC) set up an Expert Group on Liability and New Technologies.

1.4.1. Report on Liability for Artificial Intelligence and other emerging technologies

On 27 November 2019, the Expert Group on Liability and New Technologies released its report on liability for AI and other emerging technologies. The report investigates the civil liability challenges raised by digital technologies and puts forward recommendations on how to adapt the current legal framework on liability. It provides a state-of-the-art analysis of the existing laws in Europe that deal with liability for emerging technologies, noting that, despite some harmonised legislation, liability regimes still vary greatly from one Member State to another. Traditional liability rules, which belong to a legal corpus written decades or even centuries ago, are not the best fit for the challenges raised by emerging technologies: the notions of damage, causal link and fault required in tort law can become difficult to prove in cases involving these technologies.
The Product Liability Directive is a cornerstone of the EU harmonisation effort on liability. It is based on the principle that the producer (broadly defined) is liable for damage caused by a defect in a product they have put into circulation for economic purposes or in the course of their business. Even though the Directive was drafted in a technology-neutral way, the report points out that some of its key elements, including its scope and the notions of product and defect, are today inadequate for addressing the potential risks of emerging digital technologies, which also raise new procedural challenges. In its second part, the report identifies the key concepts underpinning classical liability regimes which would need legal clarification given the specificities of emerging technologies, and sets out new specific rules, principles and concepts which it might be necessary to adopt. The report accordingly makes further recommendations, such as establishing strict (no-fault) liability for certain types of emerging digital technologies, rules to protect against damage to data, a reversal of the burden of proof in certain circumstances, and the principle that liability should lie with the party that has the most control over the risks of the operation.
The report concludes that a one-size-fits-all approach is not compatible with the wide variety of liability regimes and the complexity of the diverse emerging technologies.

1.4.2. Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics

The EC released this report in February 2020, accompanying the White Paper on AI presented earlier. It assesses the relevant legal framework and identifies the challenges and uncertainties surrounding the liability and safety aspects of AI.

Safety

The report concludes that product safety legislation is adequate, as it already offers a flexible scope of protection against the risks arising from products, but that new provisions should be introduced to complement the framework, bring more legal certainty and address the challenges identified. The challenges raised by AI include complex value chains; complex products, services and systems; opacity; data dependency; connectivity; risks for the mental health of users; and autonomy and self-learning features. To address these, the report puts forward several measures, ranging from a new risk assessment procedure to human oversight throughout the whole life-cycle of AI systems and specific obligations covering mental safety risks for users. It outlines that requirements for transparency of algorithms, as well as for robustness, accountability and, where relevant, human oversight and unbiased outcomes, should be established in combination with an ex-post enforcement mechanism.

Liability

Persons who have suffered harm caused with the involvement of AI systems need to enjoy the same level of protection as persons who have suffered harm caused by other technologies. The report points out that the scope of the Product Liability Directive and the notion of ‘putting into circulation’ should be further clarified to better reflect the characteristics of AI systems and to ensure legal certainty for economic actors. The allocation of responsibilities between the different actors must be improved, fault-based liability schemes might not be adapted to AI systems, and the level of protection for individuals must be guaranteed whether or not the harm was caused by an AI system. Finally, the report outlines the importance of adopting a common approach at EU level.

1.4.3. European Parliament recent initiatives

Resolution on automated decision-making processes: ensuring consumer protection and free movement of goods and services (February 2020)

In this resolution, the EP urges the EC to bring forward proposals to adapt the EU’s safety rules, both specific and general. It further stresses the need for a risk-based approach and for a revision of the Product Liability Directive in order to ensure a functioning internal market that provides clarity for the private sector, as well as trust and protection for consumers.

Draft report with recommendations on a civil liability regime for AI (April 2020)

The EP published a draft report with recommendations to the Commission on the adoption of a civil liability regime for artificial intelligence. It formulates a genuine draft legislative text with concrete legal provisions and suggests adopting a principle-based, future-proof and horizontal legal framework. The report confirms that the wheel need not be reinvented: the existing liability and safety framework constitutes a good starting point, but it must be complemented with specific rules for AI systems, especially in relation to opacity. It expands on the risk categorisation of AI systems and on the format that the future EU legislation should take: a regulation with annexes.

Study on Liability for Artificial Intelligence (July 2020)

The study by Andrea Bertolini underlines how difficult it is to define and classify AI, and criticises the one-size-fits-all dynamic. Instead, it suggests adopting a harmonised and uniform European approach to AI and liability: revising product safety and liability legislation in a more technology-neutral way, and adopting more specific ad-hoc regimes, following a risk-management approach, to tackle the challenges that diverse technologies create.

Resolution on a Civil Liability Regime for Artificial Intelligence (October 2020)

The resolution calls for a revision of the Product Liability Directive and for legal certainty about the liability chain for AI systems. The EP considers that operator liability rules should apply to all types of AI system operations, regardless of where the operation takes place and whether it is physical or virtual in nature. It puts forward compulsory insurance and strict liability for operators of high-risk AI systems causing any harm, with the list of high-risk systems to be revised every six months. Other, non-high-risk AI systems should be governed by fault-based liability, with exoneration possible where due diligence was exercised.

Authors:
Lidia Dutkiewicz, Emine Ozge Yildirim, Noémie Krack from KU Leuven
Lucile Sassatelli from Université Côte d’Azur