New liability rules for artificial intelligence in the European Union

2 November 2022 | Knowledge, News

There is a lot going on in the EU about Artificial Intelligence

Artificial Intelligence (AI) is at the heart of the EU’s strategy for creating a digital single market. In this context, a number of EU legal documents have been emerging for several years, such as the White Paper on Artificial Intelligence of February 2020 or the European Parliament’s resolutions on ethical framework, civil liability and intellectual property rights for AI of October 2020.

In April 2021, the European Commission presented a revolutionary proposal for a regulation on AI (Artificial Intelligence Act), laying the foundations for a legal framework for the use of AI within the European Union. Legislative work on the AI Act is already at an advanced stage, and the document is expected to enter into force soon.

AI system output and civil liability

For some time, the Union has also been working on the issue of regulating civil liability in the context of AI. A few years ago, the European Parliament drafted a proposal for a regulation on this issue, but the draft did not ‘take hold’.

Regulations apply directly and uniformly in every Member State, whereas civil liability regimes vary greatly from one EU country to another. The proposed wording of the regulation was, unfortunately, wholly incompatible with several of these regimes (including Poland’s). A much better way for the EU to regulate liability is via a directive, which sets out standards and mechanisms that each Member State must then implement in a manner appropriate to its own law. This is precisely the mechanism resorted to this time.

On 28 September 2022, the European Commission adopted two proposals addressing liability for AI. One modernises the existing rules on manufacturers’ strict liability for defective products; the other proposes a new, separate directive on AI liability.

Artificial Intelligence Liability Directive

By its very title, the Artificial Intelligence Liability Directive (AILD) indicates that it concerns non-contractual liability.

In legal terms, the AILD primarily regulates tort liability, that is, liability for damage arising from events or incidents between entities not bound by a contract. Such rules are needed now that we are, indeed, surrounded by AI.

So what torts can AI commit against us? For example, an autonomously driven car hits a pedestrian on a zebra crossing. An AI-controlled drone destroys a parcel in transit by dropping it from too great a height. An AI system handling a company’s debt collection misidentifies a debtor and denies them access to services. An AI system for generating personalised medicines advises us to take a medicine that then causes harm. Many similar examples could be given. The AILD regulates liability in precisely these types of situations.

However, the Directive does not regulate contractual liability. This means that if, for example, an organisation buys an AI system from an IT vendor and that system fails, then (as a general rule) that organisation will find no remedy in the AILD and must instead seek redress under a well-drafted agreement, prepared by a lawyer who understands AI matters.

Presumption of causality at the core of AILD

Fundamental to the AILD is its Article 4, under which (subject, of course, to a number of specific conditions), where an injured person brings a compensation claim before a court for harm caused by AI, the court should presume a causal link between the fault of the defendant using the AI and the AI system’s output (or failure to produce an output) that gave rise to the damage. Put more simply, it is for the entity using the AI to show that it should not be held liable for the harm its AI caused, not the other way around (because proving this would be too challenging or too expensive for the injured person).

AILD alleviates the burden of proof for victims

Courts hearing compensation claims for damage caused by AI will be allowed to order the defendant to disclose relevant evidence, even if the injured person (the claimant) did not request its disclosure or was not even aware of its existence.

The AILD’s overarching goal is to make it as easy as possible for ‘ordinary people’ harmed by malfunctioning AI used by businesses, including large corporations, to seek compensation. It is for those who benefit from AI to show that it was not errors in their solutions that caused the damage.

Notably, the AILD is drafted directly by reference to the AI Act and builds on the same conceptual framework. It differentiates liability issues according to the risk classification of the system in question (high-risk vs. non-high-risk AI systems).

Thus, in relation to non-high-risk AI systems, the presumption of causality applies only if the court considers it excessively difficult for the claimant to prove a causal link. For high-risk AI systems, by contrast, five requirements are laid down, and the presumption of causality applies where the defendant has failed to comply with any one of them.

What next?

The Commission’s proposals must now be adopted by the European Parliament and the Council. The publication of the Commission’s draft legislation will open discussions at EU and national level, which should lead to the best possible alignment of the legislative solutions with real life.

 

Any questions? Contact the authors

Piotr Kaniewski

Paulina Perkowska

 
