The AI Act in practice: challenges and opportunities for the development of artificial intelligence in the EU

26 July 2024 | Knowledge, News, The Right Focus

On 12 July 2024, just over three years after work began, the AI Act, i.e. the Regulation laying down harmonised rules on artificial intelligence, was published in the Official Journal of the European Union. The Regulation entered into force 20 days after publication and will be fully applicable from 2 August 2026. This means that providers and users of artificial intelligence will soon face a number of new obligations.

We look at how to prepare for this effectively.

How to prepare your business for the AI Act

Before embarking on large-scale implementations, every company should consider which AI systems, if any, it uses and what its role is in relation to them.

This is because the extent of your obligations will depend on the type of system and on whether you are the system’s provider, merely a deployer, or use it after making your own modifications.


Classification of systems

The AI Act classifies AI systems according to their level of risk:

  • Solutions deemed to pose an unacceptable risk, such as those using subliminal techniques or social scoring based on behaviour or personal characteristics, are prohibited.
  • High-risk systems, such as those processing biometric data or used for employee recruitment, will be permitted only after meeting additional requirements, including, without limitation:
    • Monitoring system performance
    • Ensuring that input data is relevant and representative
    • Complying with registration obligations
  • Limited-risk systems, such as chatbots or technologies that manipulate audiovisual content, will require users to be informed that they are interacting with an artificial intelligence system.
  • Minimal-risk systems, such as spam filters, will remain free to use, although, as with all AI systems, providers and deployers should take measures to ensure that their staff and others operating the systems have a sufficient level of AI literacy.

In addition, the AI Act singles out general-purpose AI models, such as those underlying tools like ChatGPT.

Defining your role

In order to prepare adequately for the AI Act, the first step is to map and identify your processes. This will enable you to determine whether you are dealing with an AI system and to verify its technical standards.

You will then need to classify the system according to the risk categories mentioned above, define your role, i.e. whether you are a provider or a deployer and whether you are modifying the system, and identify your specific responsibilities.

The next step will be to develop appropriate procedures and documentation, including:

  • Policy on the use of AI systems
  • Technical documentation on the technologies used
  • Risk management mechanisms
  • Procedures for dealing with customers or recipients of the system

Preparation for action

As part of your operational preparation to meet your obligations under the AI Act, it is advisable to:

  • Carry out an AIRA (AI Risk Assessment) process and designate a structure responsible for managing AI, monitoring risks and ensuring compliance, as well as implementing appropriate internal policies
  • Assess the AI systems in use and analyse the associated risks (e.g. discrimination, data breaches) and compliance gaps
  • Ensure that appropriate cybersecurity standards are in place
  • Protect the organisation against potential incidents, including developing appropriate procedures for preventing, responding to and reporting incidents to the relevant authorities
  • Establish good practices, for example regarding staff training or customer information standards
  • If using technology provided by an external provider – also assess the provider using the AI Vendor risk assessment matrix or other methodology
  • In addition, providers of high-risk systems should also consider:
    • Ensuring that the system meets the requirements of the AI Act
    • Implementing a quality management system
    • Properly labelling the AI system
    • Conducting a conformity assessment and preparing a declaration of conformity
    • Fulfilling registration obligations and obligations towards supervisory authorities

Reporting obligations

The AI Act requires providers of high-risk artificial intelligence systems to report serious incidents.

Serious incidents are those that directly or indirectly lead, could have led or are likely to lead to the death of a person or serious harm to a person’s health, serious harm to property or the environment, or serious and irreversible disruption of the management or operation of critical infrastructure.

Incident prevention and response mechanisms should therefore be developed.

In addition, it is important to remember that compliance with the obligations under the AI Act will often overlap with the requirements of other regulations, such as the GDPR, DORA, DMA, DSA, or regulations on copyright protection, among others.

The AI Act also provides for the establishment of the AI Office to supervise certain systems, support the development of certain standards and enforce rules set at EU level.

In addition, each Member State should establish its own competent authority for AI matters or delegate such powers to an existing body. In Poland, this role will be fulfilled by the newly established Commission for Artificial Intelligence Supervision, according to the Ministry of Digital Affairs.

The AI Act – a summary

In summary, by imposing obligations on providers and users of AI-based solutions, the AI Act will affect not only Big Tech companies but virtually all businesses using AI.

It is predicted that within the next two years, almost 80 per cent of businesses will be using AI-based systems and will therefore fall under the AI Act to some extent.

This will require the implementation of appropriate policies, procedures and comprehensive AI governance, as well as managing the risks of using AI-based solutions provided by third parties.

It is therefore advisable to make the appropriate organisational and technical preparations now and to ensure compliance with the new regulations.

Any questions? Contact us

Contact us:

Natalia Kotłowska-Wochna

Attorney-at-Law / Head of New Tech M&A / NewTech Practice Group / Head of the Poznan Office

+48 606 689 185

n.kotlowska@kochanski.pl