New liability rules for artificial intelligence in the European Union

2 November 2022 | Knowledge, News

There is a lot going on in the EU regarding Artificial Intelligence

Artificial Intelligence (AI) is at the heart of the EU’s strategy for creating a digital single market. In this context, a number of EU legal documents have been emerging for several years, such as the White Paper on Artificial Intelligence of February 2020 or the European Parliament’s resolutions on ethical framework, civil liability and intellectual property rights for AI of October 2020.

In April 2021, the European Commission presented a revolutionary proposal for a regulation on AI (the Artificial Intelligence Act), laying the foundations for a legal framework for the use of AI within the European Union. Legislative work on the AI Act is already at an advanced stage, and the document is expected to enter into force soon.

AI system output and civil liability

For some time, the Union has also been working on the issue of regulating civil liability in the context of AI. A few years ago, the European Parliament drafted a proposal for a regulation on this issue, but the draft did not ‘take hold’.

Regulations apply directly and uniformly in every Member State, whereas civil liability regimes vary greatly from one EU country to another. The proposed wording of the regulation was unfortunately incompatible with several of these regimes (including Poland’s). A much better way for the EU to regulate the issue of liability is via a directive, which sets out certain standards and mechanisms that each Member State must then implement in a manner appropriate to its own law. This is precisely the mechanism used this time.

On 28 September 2022, the European Commission adopted two proposals leading to the regulation of AI liability. One concerns the modernisation of existing rules on the strict liability of manufacturers for defective products, whereas the other proposes a new, separate directive on AI liability.

Artificial Intelligence Liability Directive

By its very title, the Artificial Intelligence Liability Directive (AILD) indicates that it concerns non-contractual liability.

In legal-speak, the AILD primarily regulates tort liability or, to put it even more simply, liability for damage arising from events or incidents between parties not bound by a contract. Such regulation has become necessary now that we are, quite literally, surrounded by AI.

So what torts can AI commit against us? For example, an autonomously driven car hits a pedestrian on a zebra crossing. An AI-controlled drone destroys a parcel in transit by dropping it from too great a height. An AI system handling a company’s debt collection misidentifies a debtor and denies them access to services. An AI system for generating personalised medicines advises us to take a medicine that then causes harm. Many similar examples could be given. The AILD regulates liability in precisely these types of situations.

However, the Directive does not regulate contractual liability. This means that if, for example, an organisation buys an AI system from an IT vendor and that system fails, then (as a general rule) the organisation will find no remedy in the AILD and must instead seek redress under a well-drafted agreement, prepared by a lawyer who understands AI matters.

Presumption of causality at the core of AILD

Fundamental to the AILD is its Article 4, under which (subject, of course, to a number of specific conditions) a court hearing a compensation claim brought by an injured person for harm caused by AI should presume a causal link between the fault of the defendant using AI and the output (or failure to produce an output) of the AI system that gave rise to the damage. Put simply, it is the entity using AI that must show it should not be held liable for the harm its AI caused, not the other way around, because proving the causal link would be too challenging or too expensive for the injured person.

AILD alleviates the burden of proof for victims

Courts hearing cases for compensation for damage caused by AI will be allowed to order the defendant to disclose relevant evidence even if the injured person (the claimant) did not request disclosure or was not aware of its existence at all.

AILD’s overarching goal is to make it as easy as possible for ‘ordinary people’ affected by malfunctioning AI used by businesses, including large corporations, to seek compensation. It is up to the beneficiaries of AI to show that it was not the errors in their solutions that caused the damage.

Notably, the AILD refers directly to the AI Act and relies on the same conceptual framework. It differentiates liability rules according to the level of risk posed by the system in question (high-risk vs. non-high-risk AI systems).

Thus, for non-high-risk AI systems, the presumption of causality applies only if the court considers it excessively difficult for the claimant to prove a causal link. For high-risk AI systems, by contrast, five requirements are laid down, and it is non-compliance with any one of them that may trigger the presumption of causality.

What next

The Commission’s proposals now need to be adopted by the European Parliament and the Council. The publication of the Commission’s draft legislation will open discussions at EU and national levels, which should lead to the best possible alignment of the legislative solutions with real-world conditions.


Any questions? Contact the authors

Piotr Kaniewski

Paulina Perkowska
