Data Bytes 65: Your UK and European Data Privacy update for March 2026
Welcome back to your spring edition of Data Bytes.
This month our Spotlight section is provided by our consumer and competition experts Chris Eberhardt and Isabella Hunt. Chris and Isabella report on the CMA's policy paper and accompanying guidance on agentic AI and consumer law, which carry a clear message: if your business deploys an AI agent in a consumer-facing role, you are responsible for whatever that agent does. Consumer protection law does not carve out exceptions for automation, and the CMA expects fair treatment principles to be baked into system design from the outset, not bolted on afterwards.
We also see children's privacy enforcement continuing to intensify. The ICO's £14.5 million penalty against Reddit, combined with a pointed open letter to social media platforms and a joint statement with Ofcom, signals that self-declaration as an age-assurance mechanism is no longer credible. If your platform could be accessed by children, now is the time to revisit your age-assurance measures and supporting DPIAs.
Across the Channel, highlights include a landmark CJEU ruling confirming that even a first data subject access request can be refused as excessive where the request is abusive, and the EU AI Office's second draft Code of Practice on transparency of AI-generated content.
Turning to cyber, last week's announcement of Claude Mythos Preview has crystallised the risks that sit at the intersection of AI and cybersecurity for many organisations. Our latest article, available here, unpacks what Mythos means in practice and outlines the key questions and actions for boards, legal and risk teams. Demonstrating adequate assessment, decision-making and governance will be important. In summary, we see five immediate focus areas:
Top five actions for boards, legal and risk teams
Our cyber team is on hand to discuss these developments with you, including how we can assist in stress testing readiness and addressing any of the actions above.
Plenty to keep in-house teams busy. Get your Data Bytes here.
This month, our spotlight remains on agentic AI and, in particular, the CMA's policy paper on "agentic AI and consumers" (the CMA Policy Paper), published on 9 March 2026 alongside guidance for businesses on using AI agents in compliance with consumer law (the CMA Guidance). The CMA Policy Paper offers a forward-looking analysis of how agentic AI may affect consumer behaviour in digital markets (including the risks and challenges associated with agentic use), and how existing consumer protection law applies in this new context. The CMA Guidance also sets out several key considerations for businesses using (or considering) AI agents in consumer-facing interactions.
From these, three key themes stand out:
Consumer law applies in full, regardless of automation: Consumer protection law requires businesses to treat customers fairly. This is irrespective of whether a consumer interacts with a person or an AI agent. The CMA Guidance is clear: businesses remain responsible for what an AI agent does in the same way they are responsible for what an employee does. This applies even if a third party designed or provided the AI agent for the business to use. The CMA has indicated that it is likely to be unlawful under consumer protection law if an AI agent steers, pressures or misleads consumers in a way that harms a consumer's economic interests.
Agentic AI use comes with risks, challenges (and benefits): The CMA Policy Paper highlights specific risks that are heightened by agentic AI use, including:
The CMA Policy Paper considers the example of an AI agent which monitors prices, negotiates offers, switches providers, and executes purchases on a consumer's behalf. The case study sets out both the potential benefits (reduced cognitive load, time savings, accessible optimisation) and the consumer law risks that businesses using agents might face in these circumstances (lack of transparency about limitations of the agent, erroneous switching decisions, hidden incentives). Safeguards identified by the CMA to mitigate these risks include:
Design choices matter: A consistent theme in the CMA Policy Paper is that risk is often driven by design. How an AI agent is trained (including how it adapts its behaviour to respond to a user), what information it presents (or withholds) to consumers, and how much control a consumer retains when using an agent are all relevant to compliance.
The CMA has made clear that businesses should embed consumer protection principles into the design of any agentic AI system, so that the system can build trust, scale responsibly, and deliver outcomes to customers that the business itself can rely on. The CMA cross-refers to its previous work on online choice architecture, signalling that constant personalisation could make steering less visible to consumers and heighten the risk that dark patterns are deployed at scale.
The CMA's work aligns with the UK Government's broader AI Opportunities Action Plan, led by DSIT (which is referenced in the CMA Policy Paper). That plan emphasises responsible investment in the foundations of AI, pushes for cross-economy AI adoption, and seeks to make the UK a "national champion at the frontier of AI capabilities" as a core pillar enabling economic growth. Taken as a whole, the message is clear: the UK wants businesses to innovate with AI, but not at the expense of consumer protection or market integrity. Agentic AI is viewed as an opportunity; however, businesses must ensure existing consumer protections are taken seriously throughout each of the design, training, deployment, use and monitoring stages.
Given the rapid expansion in both the capabilities and adoption of AI and agentic systems, the CMA Policy Paper and CMA Guidance are helpful contributions to understanding how the UK's consumer protection regime may apply to the challenges and risks this technology creates.
For businesses, the takeaway is not to pause innovation, but rather to ensure consumer law considerations are embedded into their tools and agents from the start, such that they can be trusted (both by the business and consumers) to deliver fair and transparent outcomes. We expect further CMA engagement in this space as agentic systems become more prevalent, particularly as the CMA begins to exercise its new direct consumer enforcement powers.
On 27 March 2026, the ICO and FCA published a joint statement on existing regulatory expectations regarding financial firms' approaches to vulnerability-related data (the Statement). The Statement is intended to help firms understand and apply the FCA's expectations for delivering good outcomes for retail consumers in vulnerable circumstances under the Consumer Duty framework, in a way that aligns with the ICO's expectations on the lawful, fair and responsible use of personal information.
The Statement sets out the FCA and ICO expectations and brief illustrative examples around data processing in terms of:
Notably, the Statement also specifically calls out the substantial public interest condition under Article 9 UK GDPR as a potentially relevant lawful basis for firms processing special category personal data to support consumers in vulnerable circumstances. When relying on that condition, the related conditions under the Data Protection Act 2018 of safeguarding children and individuals at risk, and safeguarding the economic well-being of certain individuals, may be relevant.
The FCA and ICO confirm that they will continue to work together, including through the Digital Regulation Cooperation Forum (DRCF) and the UK Regulators Network (UKRN), to help ensure that regulatory expectations are clear. The ICO also welcomes continued engagement with stakeholders to help identify whether further clarity on data protection requirements may be useful.
As we have previously reported in Data Bytes (for example here), children's privacy is a key area of focus for the ICO. Over the past month, the ICO has made a strong statement about what constitutes appropriate and sufficient age assurance mechanisms for online services.
Together, these developments send a clear message that the ICO expects robust, technology-driven age-assurance mechanisms to be implemented by organisations. Organisations which do, or may, process children's personal data should review their platforms' age-assurance measures, DPIAs which consider the risks of processing children's data, and lawful bases for processing children's data.
The ICO has released a flurry of new guidance and consultations over the past few weeks. Much of this stems from changes which are being brought in under the Data Use and Access Act 2025 (DUAA).
On 17 March 2026, the ICO published a blog post setting out its findings from audits of police forces' use of facial recognition technology, emphasising the need for strong data protection governance, bias testing and staff training, and advocating that any new legal framework for biometrics proposed by the Home Office must build on, rather than replace, existing data protection law. See more details here.
On 3 March 2026, the European Commission ("EU Commission") published its draft guidance for the Cyber Resilience Act ("CRA Draft Guidance") to assist economic operators, in particular SMEs.
The CRA Draft Guidance assists manufacturers and other economic operators placing regulated products on the EU market, which include standalone software, hardware with embedded software, and remote data processing solutions. It further provides guidance on the CRA definitions, free and open-source software, dealing with substantial modifications, and the related support periods.
The CRA Draft Guidance deals with incident notifications to the CSIRT and ENISA, which must be made as soon as a manufacturer becomes aware that a threat actor has actively exploited a vulnerability or that a severe incident has occurred. According to the CRA Draft Guidance, a manufacturer is "aware" once it has conducted an initial assessment and is reasonably certain that a vulnerability is actively exploited, or a severe incident has occurred. The CRA Draft Guidance stresses that a manufacturer must carry out the initial assessment promptly after receiving initial indications of a vulnerability exploit.
Economic operators placing products with digital elements on the EU market may use the CRA Draft Guidance to assess their compliance obligations and prepare for the phased CRA application. According to the timelines set out in the CRA, economic operators need to be ready for reporting vulnerability exploits from 11 September 2026 onward (Art. 14 CRA), whereas the CRA will apply in full from 11 December 2027.
On 5 March 2026, the AI Office published its second draft of the Code of Practice on Transparency of AI-Generated Content under the AI Act ("Code of Practice on AI"). The Code of Practice on AI aims to facilitate compliance with the transparency obligations for providers and deployers of AI systems.
The Code of Practice on AI introduces a multi-layered marking approach for AI-generated content, requiring providers to implement at least two layers of machine-readable active marking, including digitally signed metadata and invisible watermarks embedded in the content. Further, providers shall make available detection mechanisms that allow users, deployers and third parties to verify whether certain content was generated or manipulated by their AI system.
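By way of illustration only, the "digitally signed metadata" layer and the accompanying detection mechanism can be thought of as a signed provenance record attached to the content. The Code of Practice on AI does not prescribe any particular implementation; the record format, field names and use of a shared HMAC key below are entirely hypothetical (real-world schemes would typically use asymmetric signatures):

```python
import hashlib
import hmac
import json

# Hypothetical signing key; production systems would use asymmetric keys
# (e.g. provider-held private key, publicly verifiable certificate).
SIGNING_KEY = b"demo-provider-key"

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Build a machine-readable marker asserting the content is AI-generated."""
    record = {
        "ai_generated": True,                                   # transparency flag
        "generator": generator,                                 # which system produced it
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds marker to content
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance_record(content: bytes, record: dict) -> bool:
    """Detection mechanism: check the signature is intact and the hash matches."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())
```

A second, independent layer (such as an invisible watermark embedded in the content itself) would sit alongside this, so that the marking survives even if the metadata is stripped.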
For deployers of AI systems, the Code of Practice on AI prescribes clear labelling requirements for AI-generated content, deep fakes and text publications. Deployers shall use a uniform EU icon containing the capitalised acronym "AI", supplemented where appropriate with text such as "Generated with AI", "Made by AI" or "Manipulated with AI".
The Code of Practice on AI may serve as a voluntary guiding document for demonstrating compliance, although it does not constitute conclusive evidence of compliance with the AI Act obligations. Providers and deployers of generative AI systems should assess their marking and labelling processes against the requirements under the Code of Practice on AI.
On 19 March 2026, the CJEU ruled that a controller can reject a first data subject access request (DSAR) as "excessive" under Art. 12 para. 5 GDPR if the data subject acts in an abusive manner with the intention of using the DSAR to claim damages. A data subject based in Austria had signed up for the newsletter of a German controller and, 13 days later, submitted a DSAR. The controller rejected that first and only request as abusive. After the data subject pursued the request and additionally claimed EUR 1,000 in non-material damages, the controller filed a declaratory action in court arguing that the individual had no right to damages; the data subject submitted a counterclaim seeking EUR 1,000 in non-material damages for the refusal to grant access to personal data. The district court requested a preliminary ruling of the CJEU.
The controller must prove two cumulative conditions to reject the first DSAR as excessive. First, the access request must go beyond the actual purpose of the right of access – which is to verify whether and how the controller is processing personal data. Second, the controller must prove that the data subject submitted the request not to check lawfulness, but to artificially create the conditions for a claim for damages. Relevant indicators include whether the data subject voluntarily provided the data, the timing of the request, the data subject's overall conduct, and publicly available information showing a pattern of systematic requests and claims for damages against multiple controllers – provided other circumstances confirm abusive intent.
The ruling departs from the previous interpretation of Art. 12 para. 5 GDPR, under which only repeated or serial requests could allow a controller to reject a DSAR as "excessive".
On 23 February 2026, the EDPB and 60 other data protection authorities published a joint statement setting out expectations for organisations developing and deploying AI image and video generation systems to avoid harm to individuals and protect their data protection rights.
The EDPB Support Pool of Experts has published a study on the data broker market in Belgium, identifying eight categories of data brokers and providers and offering a transferable methodology for other EU supervisory authorities to identify data brokers and conduct similar investigations in their own jurisdictions.
On 3 March 2026, TikTok commenced proceedings before the Irish High Court challenging the Irish Data Protection Commission's (DPC) EUR 530 million fine and order to suspend transfers of European users' personal data to China.
On 10 March 2026, the EDPB and EDPS published a joint opinion on the proposed European Biotech Act, welcoming its objective and recommending a series of targeted amendments.
On 11 March 2026, the German Federal Government submitted its draft AI Market Surveillance and Innovation Promotion Act (KI-MIG) to implement the EU AI Act, designating the Federal Network Agency (Bundesnetzagentur) as the principal market surveillance authority and central contact point, establishing at least one regulatory sandbox, and introducing administrative fines of up to EUR 50,000 for breaches that are not already covered by the AI Act's own fining regime.
On 6 March 2026, the registration deadline passed for "essential" or "important" entities domiciled in Germany to register with the Federal Office for Information Security (BSI) under the NIS2 Directive.
On 10 March 2026, the German Data Protection Conference (DSK) published its proposal [in German only] for various amendments to the GDPR, including greater accountability for manufacturers and providers of AI systems, stronger protection for minors, and safeguards for data subjects' rights when controllers deploy AI.
On 13 February 2026, the Conseil d'Etat confirmed the position of the CNIL, aligned with the case law of the CJEU, that pseudonymised data remains personal data, whereas truly anonymised data falls outside the scope of the GDPR. See the full case here.
On 23 December 2025, the Conseil d'Etat partially overturned the CNIL's decision, which had imposed a €32 million fine on Amazon on the grounds that its employee monitoring system was excessively intrusive and could not be justified by the company's legitimate interests, particularly as it enabled detailed tracking of workers' performance. The Conseil d'Etat held that some of the monitoring tools served legitimate organisational and operational purposes rather than disciplinary ones, thereby adjusting the legal assessment of the processing. The court upheld the sanction in part (notably relating to data minimisation) but reduced the fine to €15 million. This ruling confirms that certain employee monitoring processes may be lawful if clearly justified by operational needs, but companies should carefully document purposes to avoid regulatory sanctions. See the full case here.
On 16 February 2026, the CNIL and the High Authority for Health (HAS), as part of their new partnership, launched a public consultation on draft guidance designed to assist healthcare establishments and professionals in the appropriate use of artificial intelligence systems deployed in healthcare settings. Healthcare sector stakeholders and AI system providers are encouraged to review the draft and submit their contributions before the 16 April 2026 deadline, as it will shape future compliance expectations in a highly regulated sector. See more details here.
On 12 March 2026, the Italian Data Protection Authority (IDPA) fined Intesa Sanpaolo €17.628 million after finding that its profiling of approximately 2.4 million customers for transfer to Isybank lacked a valid legal basis and was not supported by sufficiently transparent notices. The Italian Competition Authority had already investigated the case, which was closed following commitments adopted by the bank. See the IDPA press release here and the full decision here.
On 12 March 2026, the European Commission and the European Data Protection Board published the individual submissions received in response to the public consultation on the draft joint guidelines on the interplay between the Digital Markets Act and the GDPR. See the Commission update here.
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.