With the rapid development of artificial intelligence, there is a need to adapt legal regulations in a way that effectively protects the rights of individuals while supporting technological innovation.
In this context, a key challenge is to harmonise the AI Act with existing data protection regulations such as the GDPR.
The AI Act responds to this challenge in two ways. First, it emphasises the importance of rules protecting privacy in a broad sense. Second, it explicitly states that it is not intended to affect existing EU law on the processing of personal data, nor to interfere with the roles and powers of the independent supervisory authorities operating in this area.
To the extent that the design, development or use of AI systems involves the processing of personal data, the AI Act also does not affect the data protection obligations, under EU or national law, of providers and users of AI systems acting as data controllers or data processors.
The AI Act states that data subjects retain all the rights and guarantees granted to them under Union law (i.e. the GDPR, among others), including those related to automated decision-making in individual cases, including profiling.
The rules set out in the AI Act for the placing on the market, putting into service and use of AI systems should facilitate the effective implementation of such systems and enable data subjects to benefit from the rights guaranteed to them and the other remedies available in the EU.
The Polish approach
The President of the Office for the Protection of Personal Data (UODO) also emphasises that ensuring compatibility between these regulations is one of the most important tasks of the Polish legislator.
Indeed, the AI Act aims to regulate the use of artificial intelligence in a way that minimises risks to privacy and data security. At the same time, it should promote the development of modern technology.
What does this mean in practice?
Legislative tandem, or the GDPR + the AI Act
Firstly, the AI Act and the GDPR should be considered independent, equal pieces of legislation. For this reason, they are often referred to as ‘tandem legislation’.
Given that personal data is often a key part of the functionality of AI-based technologies, GDPR compliance is essential to ensure their legitimacy.
In practice, this means that companies developing AI systems need to consider data protection from the technology design stage (privacy by design) and apply an approach based on data minimisation and other data protection rules under the GDPR.
How can this be done?
Experience to date suggests that, although the two sets of issues overlap, an independent approach to each is likely to be required from a regulatory perspective.
AI systems under development will therefore need to meet the requirements of two independent checklists:
- The first, based on the existing provisions of the GDPR
- The second, adapted to the requirements of the AI Act
An alternative solution could, of course, be a common checklist, although our experience suggests that there may be some difficulties in this regard.
The GDPR and the AI Act – how to reconcile the differences
On the subject of the separation of the regulations, it is worth noting that the independence of the two regimes is reflected in the difference between their approaches.
The GDPR sets out fairly detailed requirements for the processing of personal data, imposing obligations on controllers and processors in relation to, among other things:
- Transparency
- Data minimisation
- Purpose limitation
Most importantly, it grants data subjects a number of rights.
While the AI Act is based on similar values such as the protection of human rights and the prevention of discrimination, it is more focused on the risks associated with AI technology, whether these risks are related to personal data or other factors.
An obvious element of this independence is that the AI Act can also apply to technologies that do not process personal data and are therefore not subject to the GDPR, such as AI systems used in the industrial sector to optimise production processes.
This shows that the two pieces of legislation, while complementary, have de facto separate purposes and areas of application.
A risk-based approach: what it means for the GDPR and the AI Act
The differences between the GDPR and the AI Act are many. Although both regulations are often said to take a risk-based approach, their practical application differs significantly, both in risk assessment procedures and in risk classification.
At the heart of the GDPR are the rights of the individual whose data is being processed, and the associated requirements are aimed at eliminating the risks associated with such processing.
The AI Act, by contrast, takes an approach based on the obligations it imposes on providers and deployers of AI systems. In this context, it is more of a prohibitive regulation, focused on compliance management.
Thus, while the GDPR focuses on protecting individuals from data processing risks, the AI Act addresses a wide range of risks associated with the use of artificial intelligence in different social and economic contexts.
Data – input and output
The issue of the data itself is also an interesting one.
In the GDPR, the main focus is on data collected by the controller for a strictly defined purpose (and purpose is a very important concept here). Input data is therefore key.
In the AI Act, input data is also very important, but in a broader sense than in the GDPR, for example in the context of training. The AI Act additionally places emphasis on output data, a concept that, with a few exceptions, does not appear in the GDPR. Output is relevant from the point of view of risks such as potential discrimination, copyright issues and so on.
The biggest challenge at the interface between the two regulations will undoubtedly be AI systems used in medicine, employment, biometrics and similar fields, that is, those generally classified as high-risk systems. In these areas the personal data aspect will be crucial: in addition to meeting a number of requirements under the AI Act, deployers will have to ensure that data is processed in accordance with the GDPR in order to protect privacy effectively and prevent misuse.
Thus, the future of the practical application of artificial intelligence regulations is certainly inextricably linked to the protection of personal data.
The harmonisation of these regulations, and in practice their enforcement, is key to ensuring that AI is developed responsibly, transparently and in accordance with the rights of individuals.
This challenge will have to be met by companies implementing AI-based solutions, by developers at the programming stage, by lawyers advising on the solutions to be implemented and, finally, by supervisory authorities. Depending on the approach adopted in a given country, those authorities will either have to tackle AI and data protection under a single remit or work in harmony across two independent regimes.
Any questions? Contact us