7 keys to the European Artificial Intelligence Act and its impact on data protection
Technology is advancing by leaps and bounds. One of the most exciting – and also most controversial – areas is artificial intelligence (AI), which promises to revolutionise how we work, communicate and live. But, like any revolution, it raises big questions: how do we ensure that the personal data it uses is handled ethically and securely? This is where Regulation (EU) 2024/1689 on Artificial Intelligence (hereafter, the EU AI Act), which entered into force on 1 August 2024 (although most of its provisions will apply from 2 August 2026), comes into play, together with its connection to the General Data Protection Regulation (GDPR). In this article, we analyse the use of artificial intelligence and its impact on data protection.
AI and data protection: an inseparable tandem
AI runs on data. Without data to train algorithms, it simply cannot learn or make decisions. However, when it comes to personal data, we enter a particularly sensitive area. This is where data protection and AI become inseparable.
On the one hand, AI seeks to make the most of data to develop innovative solutions, but on the other hand, the GDPR ensures that such use is ethical, transparent and respectful of the fundamental rights of individuals.
Compliance with both standards is essential for developing responsible AI systems.
Transparency: a key principle of the GDPR and the EU AI Act
Transparency is a key principle in both the GDPR and the EU AI Act. Both state that individuals have the right to understand how their data is used and how it affects decisions concerning them.
The GDPR already requires companies to provide clear and simple information on how they handle personal data. This includes:
- Who is using the data.
- For what purpose.
- What rights people have.
The EU AI Act extends this concept of transparency to the world of algorithms. It is not enough to explain how the data is used; it is also necessary to justify how the AI that processes it works. This includes:
- Simple and accessible explanations: Companies should detail in an understandable way how and why an AI system has made a decision.
- Prior information for users: If a person interacts with an AI system (such as a chatbot), they must be told that they are dealing with a machine and not a human.
Informed and explicit consent: new requirements
Under the GDPR, any processing of personal data needs a legal basis, and informed and explicit consent is one of the most common. However, when AI comes into play, the concept of consent takes on a new dimension due to the complexity and impact of data use in these systems, which is why the EU AI Act adds a layer of control:
- Prohibition of certain practices, even with consent: While consent is an important legal basis, the EU AI Act directly prohibits some uses of AI, such as:
  - Real-time biometric identification systems in public spaces for mass surveillance purposes.
  - Technologies that manipulate behaviour or emotions to influence decisions in a harmful way.
- Greater clarity and detail in high-risk applications. AI applications that process sensitive data (such as biometrics or health data) should obtain even clearer and more detailed consent.
Impact assessments: anticipating risks
In the world of AI, where systems process large amounts of data and can make automated decisions with significant impacts, planning and prevention are essential. This is where Impact Assessments come into play. Both the GDPR and the EU AI Act require this type of analysis to identify risks and ensure that AI technologies are used safely and ethically.
It is a tool to anticipate and minimise the negative effects of AI on people’s rights, especially in relation to privacy and data protection.
The GDPR introduces Data Protection Impact Assessments (DPIAs) as a mandatory requirement when a technology or system poses a high risk to the rights and freedoms of individuals.
The EU AI Act takes a step further with specific impact assessments for artificial intelligence systems, especially those classified as high risk, including a fundamental rights impact assessment required of certain deployers.
Risk categories: a risk-based approach
Both the GDPR and the EU AI Act share an approach based on risk management at the core of their application; both regulate the interaction of technology with fundamental rights, and do so through a thorough analysis of the risks that such technologies may pose.
As a new development, the EU AI Act classifies AI systems into four risk levels, each with its respective obligations and restrictions:
- Unacceptable risk: prohibited because of their impact on fundamental rights, such as social scoring systems or subliminal manipulation.
- High risk: subject to strict regulatory requirements, such as AI used in labour recruitment, education, health or justice.
- Limited risk: require greater transparency towards the user, such as chatbots or AI systems that interact with people.
- Minimal risk: not subject to specific obligations, such as video games or image filters.
Citizens’ rights: monitoring and protection
The GDPR already guarantees rights such as access, rectification and erasure (the right to be forgotten) of our personal data, the right to portability, the right to object to or limit processing and the right not to be subject to automated decisions. The EU AI Act reinforces these rights by:
- Requiring human oversight: In high-risk systems, human intervention in critical decisions must be ensured.
- Non-discrimination: The EU AI Act prohibits AI systems from discriminating based on biased training data or unfair configurations, protecting citizens from harmful decisions.
Sanctions: fines in the millions
Both the GDPR and the EU AI Act impose strict requirements on companies that process personal data or develop and implement AI systems. Ignoring these regulations can be very costly, literally: fines for non-compliance can run into the millions, as well as causing irreparable damage to organisations’ reputations.
What does the GDPR provide for in terms of sanctions?
Fines can be up to 4% of the company’s annual global turnover or 20 million euros, whichever is higher.
What fines does the EU AI Act provide for?
- Up to 35 million euros or, if the offender is a company, up to 7% of its total worldwide turnover for the preceding financial year, whichever is higher.
- Some infringements may carry lower but significant fines of up to 15 million euros or 3% of overall turnover.
Conclusion: a safer and more ethical future
The advent of the EU AI Act is a milestone in technology regulation. Together with the GDPR, it lays the foundations for responsible innovation that respects privacy and fundamental rights.
If your company works with AI, now is the time to review your processes, implement risk assessments and ensure compliance with both regulations.
Do you need advice? Visit our data protection area.