New liability rules for artificial intelligence in the European Union

2 November 2022 | Knowledge, News

There is a lot going on in the EU when it comes to Artificial Intelligence

Artificial Intelligence (AI) is at the heart of the EU’s strategy for creating a digital single market. In this context, a number of EU legal documents have emerged over the past several years, such as the White Paper on Artificial Intelligence of February 2020 and the European Parliament’s resolutions of October 2020 on an ethical framework, civil liability and intellectual property rights for AI.

In April 2021, the European Commission presented a revolutionary proposal for a regulation on AI (the Artificial Intelligence Act), laying the foundations for a legal framework for the use of AI within the European Union. Legislative work on the AI Act is already at an advanced stage, and the regulation is expected to enter into force in the near future.

AI system output and civil liability

For some time, the Union has also been working on regulating civil liability in the context of AI. A few years ago, the European Parliament drafted a proposal for a regulation on this issue, but the draft never gained traction.

Regulations apply directly and uniformly in every Member State, whereas civil liability regimes vary greatly from one EU country to another. The proposed wording of the regulation was unfortunately incompatible with several of these regimes (including Poland’s). A much better way for the EU to regulate liability is via a directive, which sets out standards and mechanisms that each Member State must then implement in a manner appropriate to its own legal system. This is precisely the instrument chosen this time.

On 28 September 2022, the European Commission adopted two proposals regulating liability for AI. One modernises the existing rules on the strict liability of manufacturers for defective products, while the other proposes a new, separate directive on AI liability.

Artificial Intelligence Liability Directive

Its very title indicates that the Artificial Intelligence Liability Directive (AILD), formally the Directive on adapting non-contractual civil liability rules to artificial intelligence, concerns non-contractual liability.

In legal-speak, the AILD primarily regulates tort liability or, to put it more simply, liability for damage arising from events or incidents between parties not bound by a contract. Such rules are needed now that we are, quite literally, surrounded by AI.

So what torts can AI commit against us? For example, an autonomously driven car hits a pedestrian on a zebra crossing. An AI-controlled drone destroys a parcel in transit by dropping it from too great a height. An AI system handling a company’s debt collection misidentifies a debtor and denies them access to services. An AI system generating personalised medicines recommends a medicine that then causes harm. There are many similar examples. The AILD regulates liability in precisely these types of situations.

However, the Directive does not regulate contractual liability. This means that if, for example, an organisation buys an AI system from an IT vendor and that system fails, the organisation will (as a general rule) find no remedy in the AILD and must instead seek redress under a well-drafted agreement, prepared by a lawyer who understands AI matters.

Presumption of causality at the core of AILD

Fundamental to the AILD is Article 4, under which (subject, of course, to a number of specific conditions), if an injured person brings a compensation claim before a court for harm caused by AI, the court should presume a causal link between the fault of the defendant using the AI and the output produced by the AI system (or its failure to produce an output) that gave rise to the damage. Put simply, it is for the entity using AI to show that it should not be held liable for the harm caused by its AI, and not the other way around, because proving this would be too difficult or too expensive for the injured person.

AILD alleviates the burden of proof for victims

Courts hearing compensation claims for damage caused by AI will be allowed to order the defendant to disclose relevant evidence even if the injured person (the claimant) did not request its disclosure or was not aware of its existence at all.

The AILD’s overarching goal is to make it as easy as possible for ‘ordinary people’ harmed by malfunctioning AI used by businesses, including large corporations, to seek compensation. It is up to the beneficiaries of AI to show that it was not errors in their solutions that caused the damage.

Notably, the AILD refers directly to the AI Act and is built on the same grid of concepts. It differentiates liability rules according to the level of risk posed by the AI system in question (high-risk vs. non-high-risk AI systems).

Thus, for non-high-risk AI systems, the presumption of causality applies only if the court considers it excessively difficult for the claimant to prove the causal link. For high-risk AI systems, on the other hand, five requirements are laid down, and the presumption of causality may apply only where at least one of them has not been complied with.

What next?

The Commission’s proposals must now be adopted by the European Parliament and the Council. The publication of the Commission’s drafts will open discussions at EU and national levels, which should lead to the best possible alignment of the legislative solutions with real life.

 

Any questions? Contact the authors

Piotr Kaniewski

Paulina Perkowska

 
