Legal development

Data Bytes 65: Your UK and European Data Privacy update for March 2026


    Welcome back to your spring edition of Data Bytes.

    This month our Spotlight section is provided by our consumer and competition experts Chris Eberhardt and Isabella Hunt. Chris and Isabella report on the CMA's policy paper and accompanying guidance on agentic AI and consumer law, which carry a clear message: if your business deploys an AI agent in a consumer-facing role, you are responsible for whatever that agent does. Consumer protection law does not carve out exceptions for automation, and the CMA expects fair treatment principles to be baked into system design from the outset, not bolted on afterwards.

    We also see children's privacy enforcement continuing to intensify. The ICO's £14.5 million penalty against Reddit, combined with a pointed open letter to social media platforms and a joint statement with Ofcom, signals that self-declaration as an age-assurance mechanism is no longer credible. If your platform could be accessed by children, now is the time to revisit your age-assurance measures and supporting DPIAs.

    Across the Channel, highlights include a landmark CJEU ruling confirming that even a first data subject access request can be refused as excessive where the request is abusive, and the EU AI Office's second draft Code of Practice on transparency of AI-generated content.

    Turning to cyber, last week's announcement of Claude Mythos Preview has crystallised the risks that sit at the intersection of AI and cybersecurity for many organisations. Our latest article, available here, unpacks what Mythos means in practice and outlines the key questions and actions for boards, legal and risk teams. Demonstrating adequate assessment, decision-making and governance will be important. In summary, we see five immediate focus areas:

    Top five actions for boards, legal and risk teams

    1. Review vulnerability risk-assessment programmes to ensure they remain fit for purpose given this material change in the threat landscape.
    2. Strengthen board reporting to focus on the most decision-useful metrics, including where risk tolerance remains, the effectiveness of compensating controls, and progress in closing known gaps.
    3. Urgently reassess third- and fourth-party risk – many suppliers may be less prepared than you.
    4. Assume compromise: conduct a formal maturity assessment of whole-of-organisation readiness to respond to a high-impact cyber incident; and uplift response capability and plans in line with risk appetite.
    5. Connect legal, risk, compliance and security teams to ensure effective coordination, consistent application of risk appetite, and clear accountability.

    Our cyber team is on hand to discuss these developments with you, including how we can assist in stress testing readiness and addressing any of the actions above.

    Plenty to keep in-house teams busy; get your Data Bytes here.

    Spotlight on Agentic AI and consumer law in the UK

    This month, our spotlight remains on agentic AI and, in particular, the CMA's policy paper on "agentic AI and consumers" (the CMA Policy Paper), published on 9 March 2026 alongside guidance for businesses on using AI agents in compliance with consumer law (the CMA Guidance). The CMA Policy Paper offers a forward-looking analysis of how agentic AI may affect consumer behaviour in digital markets (including the risks and challenges associated with agentic use), and how existing consumer protection law applies in this new context. The CMA Guidance also sets out several key considerations for businesses using (or considering) AI agents in consumer-facing interactions.

    From these, three key themes stand out:

    Consumer law applies in full, regardless of automation: Consumer protection law requires businesses to treat customers fairly. This is irrespective of whether a consumer interacts with a person or an AI agent. The CMA Guidance is clear: businesses remain responsible for what an AI agent does in the same way they are responsible for what an employee does. This applies even if a third party designed or provided the AI agent for the business to use. The CMA has indicated that it is likely to be unlawful under consumer protection law if an AI agent steers, pressures or misleads consumers in a way that harms a consumer's economic interests.

    Agentic AI use comes with risks, challenges (and benefits): The CMA Policy Paper highlights specific risks that are heightened by agentic AI use, including:

    • AI agents steering consumers towards outcomes that benefit a business rather than a consumer;
    • errors (including hallucination) or biased outputs; and
    • consumers over-relying on AI agents, deferring too easily to automated decisions and becoming less able to scrutinise an AI-driven result.

    The CMA Policy Paper considers the example of an AI agent which monitors prices, negotiates offers, switches providers, and executes purchases on a consumer's behalf. The case study sets out both the potential benefits (reduced cognitive load, time savings, accessible optimisation) and the consumer law risks that businesses using agents might face in these circumstances (lack of transparency about limitations of the agent, erroneous switching decisions, hidden incentives). Safeguards identified by the CMA to mitigate these risks include:

    • accepting accountability (including liability) for the AI agent's actions;
    • training the agent to provide clear disclosure of important information;
    • ensuring the AI agent gets 'mandatory confirmation' from users for high-risk actions (although what this may look like in practice is not specified);
    • maintaining robust audit logs that show rapid intervention and remedial actions when errors occur; and
    • using human oversight to monitor performance, complaints and feedback, with clear processes to resolve or escalate them as appropriate.

    Design choices matter: A consistent theme in the CMA Policy Paper is that risk is often driven by design. How an AI agent is trained (including how it adapts its behaviour to respond to a user), what information it presents (or withholds) to consumers, and how much control a consumer retains when using an agent are all relevant to compliance.

    The CMA has made clear that businesses should embed consumer protection principles into the design of any agentic AI system so that it can build trust, scale responsibly and deliver outcomes to customers whose reliability the business itself can trust. The CMA cross-refers to its previous work on online choice architecture, signalling that constant personalisation could make steering less visible to consumers, heightening the risk that dark patterns are deployed at scale.

    Building on earlier CMA work and the UK Government’s wider AI strategy

    The CMA's work aligns with the UK Government's broader AI Opportunities Action Plan, led by DSIT (which is referenced in the CMA Policy Paper). That plan emphasises responsible investment in the foundations of AI, pushes for cross-economy AI adoption, and seeks for the UK to be a "national champion at the frontier of AI capabilities" as a core pillar enabling economic growth. Taken as a whole, the message is clear: the UK wants businesses to innovate with AI, but not at the expense of consumer protection or market integrity. Agentic AI is viewed as an opportunity; however, businesses must ensure existing consumer protections are taken seriously throughout each of the design, training, deployment, use and monitoring stages.

    Looking ahead

    Given the rapid expansion in both the capabilities and adoption of AI and agentic systems, the CMA Policy Paper and CMA Guidance are helpful contributions to understanding how the UK's consumer protection regime may apply to address the challenges and risks created by the rapid technological development in this field.

    For businesses, the takeaway is not to pause innovation, but rather to ensure consumer law considerations are embedded into their tools and agents from the start, such that they can be trusted (both by the business and consumers) to deliver fair and transparent outcomes. We expect further CMA engagement in this space as agentic systems become more prevalent, particularly as the CMA begins to exercise its new direct consumer enforcement powers.

    UK Updates

    1. ICO and FCA: A joint statement to help with navigating existing regulatory expectations

    On 27 March 2026, the ICO and FCA published a joint statement on existing regulatory expectations regarding financial firms' approaches to vulnerability related data (the Statement). It is intended to help firms understand and apply the FCA's expectations for delivering good outcomes for retail consumers in vulnerable circumstances under the Consumer Duty framework in a way that aligns with the ICO's expectations on the lawful, fair and responsible use of personal information.

    The Statement sets out the FCA and ICO expectations and brief illustrative examples around data processing in terms of:

    • Supporting consumers in vulnerable circumstances;
    • Sharing vulnerability related data appropriately across the distribution chain; and
    • Monitoring outcomes for these consumers.

    Notably, the Statement also specifically calls out the substantial public interest condition under Article 9 UK GDPR as a potentially relevant lawful basis for firms processing special category personal data to support consumers in vulnerable circumstances. When relying on that condition, the related conditions under the Data Protection Act 2018 of safeguarding children and individuals at risk, and safeguarding the economic well-being of certain individuals, may be relevant.

    The FCA and ICO confirm that they will continue to work together, including through the Digital Regulation Cooperation Forum (DRCF) and the UK Regulators Network (UKRN), to help ensure that regulatory expectations are clear. The ICO also welcomes continued engagement with stakeholders to help identify whether further clarity on data protection requirements may be useful.

    2. Update on children's privacy

    As we have previously reported in Data Bytes (for example here), children's privacy is a key area of focus for the ICO. Over the past month, the ICO has made a strong statement about what constitutes appropriate and sufficient age assurance mechanisms for online services.

    • On 23 February 2026, the ICO issued a £14,472,500 penalty to Reddit for processing children's personal data without a lawful basis (through its failure to implement appropriate age assurance mechanisms) and failing to conduct a DPIA focussing on the risks of processing children's data during a period in which it knew children were accessing its services. This penalty follows the MediaLab fine (see our previous Data Bytes here) and is part of the wider ICO review of children's safety online. On 1 April 2026, the ICO confirmed that Reddit had appealed the penalty to the First-tier Tribunal.
    • On 12 March 2026, the ICO published an open letter calling on all social media and video-sharing platforms to urgently implement viable age assurance technologies, warning that the age self-declaration used by many services is easy to circumvent, is not effective and puts children's online safety at risk.
    • On 25 March 2026, the ICO and Ofcom, in its capacity as Online Safety Regulator, published a joint statement setting out aligned expectations on age assurance under UK data protection law and the Online Safety Act 2023. The joint statement confirms that self-declaration and profiling are ineffective tools when used in isolation and that services need to implement effective age assurance mechanisms or risk being in breach of UK data protection law and the Online Safety Act 2023. Finally, the statement notes that if services are unsure about users' ages, the base premise should be that the Children's Code applies.

    Together, these developments send a clear message that the ICO expects organisations to implement robust, technology-driven age-assurance mechanisms. Organisations which do, or may, process children's personal data should review their platforms' age-assurance measures, the DPIAs addressing the risks of processing children's data, and their lawful bases for that processing.

    3. Recent ICO guidance and consultations

    The ICO has released a flurry of new guidance and consultations over the past few weeks. Much of this stems from changes which are being brought in under the Data Use and Access Act 2025 (DUAA).

    • On 27 February 2026, the ICO launched a public consultation on draft updates to its guidance on the research, archiving and statistics provisions in the DUAA, specifically the revised criteria for scientific research, on which the ICO had received requests for greater clarity, as well as on the ICO's position regarding the new disproportionate effort exemption introduced by section 77 of the DUAA. The consultation is open until 27 April 2026. Responses can be submitted via the ICO's online survey on Citizen Space or by email to researchguidance@ico.org.uk.
    • On 23 March 2026, the ICO updated its guidance on lawful basis to include a section on recognised legitimate interests. Recognised legitimate interest was introduced by the DUAA as a lawful basis which can be relied on when processing personal data in specific and limited circumstances relating to national security, public security, defence, crime prevention and similar purposes.
    • Also on 23 March 2026, the ICO updated its guidance on Purpose Limitation to reflect amendments introduced by the DUAA including provisions on compatibility and the reuse of personal data.
    • On 31 March 2026, the ICO launched a public consultation on draft updates to its guidance about automated decision making (including profiling). The consultation is open until 29 May 2026. Responses can be submitted via the ICO's online survey on Citizen Space or by email to ai@ico.org.uk.

    In Case You Missed It - UK

    1. On 17 March 2026, the ICO published a blog post setting out its findings from audits of police forces' use of facial recognition technology, emphasising the need for strong data protection governance, bias testing and staff training, and advocating that any new legal framework for biometrics proposed by the Home Office must build on, rather than replace, existing data protection law. See more details here.

    2. On 19 March 2026, the House of Lords Library published a briefing on cyber security and the UK government, providing an overview of cyber-attacks, principal threat actors and the evolving cyber threat landscape, and outlining government initiatives to enhance cyber resilience including the Government Cyber Action Plan, and the Cyber Security and Resilience (Network and Information Systems) Bill currently progressing through Parliament. See more details here.
    3. On 26 March 2026, the ICO announced that it had issued a £100,000 fine to TMAC Ltd for making over 260,000 unsolicited marketing calls between February and September 2024 to individuals whose numbers were registered on the Telephone Preference Service, in breach of regulations 21 and 24 of the Privacy and Electronic Communications Regulations (PECR). See more details here.
    4. On 31 March 2026, the ICO released a report on the use of automation in the recruitment process. The report sets out the ICO's expectations for all businesses using automated decision making in the hiring process: (1) proactively monitor for bias; (2) be transparent with applicants about the use of automated decision making; and (3) tell candidates how to exercise their right to challenge automated decisions.

    EU Updates

    Commission publishes Draft Guidance on the Cyber Resilience Act

    On 3 March 2026, the European Commission ("EU Commission") published its draft guidance for the Cyber Resilience Act ("CRA Draft Guidance") to assist economic operators, in particular SMEs.

    The CRA Draft Guidance assists manufacturers and other economic operators placing regulated products on the EU market, including standalone software, hardware with embedded software, and remote data processing solutions. It further provides guidance on the CRA definitions, free and open-source software, and dealing with substantial modifications, as well as related support periods.

    The CRA Draft Guidance deals with incident notifications to the CSIRT and ENISA, which must be made as soon as a manufacturer becomes aware that a threat actor has actively exploited a vulnerability or that a severe incident has occurred. According to the CRA Draft Guidance, a manufacturer is "aware" once it has conducted an initial assessment and is reasonably certain that a vulnerability is actively exploited or a severe incident has occurred. The CRA Draft Guidance stresses that a manufacturer must carry out the initial assessment promptly after receiving initial indications of a vulnerability exploit.

    Economic operators placing products with digital elements on the EU market may use the CRA Draft Guidance to assess their compliance obligations and prepare for the phased CRA application. According to the timelines set out in the CRA, economic operators need to be ready for reporting vulnerability exploits from 11 September 2026 onward (Art. 14 CRA), whereas the CRA will apply in full from 11 December 2027.

    AI Office publishes second draft Code of Practice on transparency of AI-generated content

    On 5 March 2026, the AI Office published its second draft of the Code of Practice on Transparency of AI-Generated Content under the AI Act ("Code of Practice on AI"). The Code of Practice on AI aims to facilitate compliance with the transparency obligations for providers and deployers of AI systems.

    The Code of Practice on AI introduces a multi-layered marking approach for AI-generated content, requiring providers to implement at least two layers of machine-readable active marking, including digitally signed metadata and invisible watermarks embedded in the content. Further, providers shall make available detection mechanisms that allow users, deployers and third parties to verify whether certain content was generated or manipulated by their AI system.

    For deployers of AI systems, the Code of Practice on AI prescribes clear labelling requirements for AI-generated content, deep fakes and text publications. Deployers shall use a uniform EU icon containing the capitalised acronym "AI", supplemented where appropriate with text such as "Generated with AI", "Made by AI" or "Manipulated with AI".

    The Code of Practice on AI may serve as a voluntary guiding document for demonstrating compliance, although it does not constitute conclusive evidence of compliance with the AI Act obligations. Providers and deployers of generative AI systems should assess their marking and labelling processes against its requirements.

    CJEU limits DSAR: Controllers can reject a first GDPR access request as "excessive"

    On 19 March 2026, the CJEU ruled that a controller can reject a first data subject access request (DSAR) as "excessive" under Art. 12 para. 5 GDPR if the data subject acts in an abusive manner with the intention of using the DSAR to claim damages. A data subject based in Austria had signed up for the newsletter of a German controller and, 13 days later, submitted a DSAR. The controller rejected that first and only request as abusive. After the data subject pursued the request and additionally claimed EUR 1,000 in non-material damages, the controller filed a declaratory action in court arguing that the individual had no right to damages; the data subject submitted a counterclaim seeking EUR 1,000 in non-material damages for the refusal to grant access to personal data. The district court requested a preliminary ruling of the CJEU.

    According to the CJEU, the controller must prove two cumulative conditions to reject a first DSAR as excessive. First, the access request must go beyond the actual purpose of the right of access – which is to verify whether and how the controller is processing personal data. Second, the controller must prove that the data subject submitted the request not to check lawfulness, but to artificially create the conditions for a claim for damages. Relevant indicators include whether the data subject voluntarily provided the data, the timing of the request, the data subject's overall conduct, and publicly available information showing a pattern of systematic requests and claims for damages against multiple controllers – provided other circumstances confirm abusive intent.

    The ruling departs from the previous interpretation of Art. 12 para. 5 GDPR, under which only repeated or serial requests could allow a controller to reject a DSAR as "excessive".

    In Case You Missed It - Europe

    On 23 February 2026, the EDPB and 60 other data protection authorities published a joint statement setting out expectations for organisations developing and deploying AI image and video generation systems to avoid harm to individuals and protect their data protection rights.

    The EDPB Support Pool of Experts has published a study on the data broker market in Belgium, identifying eight categories of data brokers and providers and offering a transferable methodology for other EU supervisory authorities to identify data brokers and conduct similar investigations in their own jurisdictions.

    On 3 March 2026, TikTok commenced proceedings before the Irish High Court challenging the Irish Data Protection Commission's (DPC) EUR 530 million fine and order to suspend data transfers of European users' personal data to China.

    On 10 March 2026, the EDPB and EDPS published a joint opinion on the proposed European Biotech Act, welcoming its objective and recommending a series of targeted amendments.

    In Case You Missed It - Germany

    On 11 March 2026, the German Federal Government submitted its draft AI Market Surveillance and Innovation Promotion Act (KI-MIG) to implement the EU AI Act, designating the Federal Network Agency (Bundesnetzagentur) as the principal market surveillance authority and central contact point, establishing at least one regulatory sandbox, and introducing administrative fines of up to EUR 50,000 for breaches of the AI Act that are not covered by AI Act fines.

    On 6 March 2026, the registration deadline elapsed for "essential" or "important" entities domiciled in Germany to register with the Federal Office for Information Security (BSI) under the NIS2 Directive.

    On 10 March 2026, the German Data Protection Conference (DSK) published its proposal [in German only] for various amendments to the GDPR, including greater accountability for manufacturers and providers of AI systems, stronger protection for minors, and safeguards for data subjects' rights when controllers deploy AI.

    In Case You Missed It - France

    Conseil d'Etat confirms CNIL and CJEU distinction between pseudonymised and anonymised data

    On 13 February 2026, the Conseil d'Etat confirmed the position of the CNIL, aligned with the case law of the CJEU, that pseudonymised data remains personal data, whereas truly anonymised data falls outside the scope of the GDPR. See the full case here.

    Conseil d'Etat tempers the CNIL's decision on Amazon, reducing the fine and holding that Amazon's employee monitoring is justified by organisational needs, not disciplinary purposes

    On 23 December 2025, the Conseil d'Etat partially overturned the CNIL's decision, which had imposed a €32 million fine on Amazon on the grounds that its employee monitoring system was excessively intrusive and could not be justified by the company's legitimate interests, particularly as it enabled detailed tracking of workers' performance. The Conseil d'Etat held that some of the monitoring tools served legitimate organisational and operational purposes rather than disciplinary ones, thereby adjusting the legal assessment of the processing. The court upheld the sanction in part (notably relating to data minimisation) but reduced the fine to €15 million. This ruling confirms that certain employee monitoring processes may be lawful if clearly justified by operational needs, but companies should carefully document purposes to avoid regulatory sanctions. See the full case here.

    CNIL and HAS seek feedback on draft guide for AI in health care

    On 16 February 2026, the CNIL and the High Authority for Health (HAS), as part of their new partnership, launched a public consultation on draft guidance designed to assist healthcare establishments and professionals in the appropriate use of artificial intelligence systems deployed in healthcare settings. Healthcare sector stakeholders and AI system providers are encouraged to review the draft and submit their contributions before the 16 April 2026 deadline, as it will shape future compliance expectations in a highly regulated sector. See more details here.

    In Case You Missed It - Italy

    On 12 March 2026, the Italian Data Protection Authority (IDPA) fined Intesa Sanpaolo €17.628 million after finding that its profiling of approximately 2.4 million customers for transfer to Isybank lacked a valid legal basis and was not supported by sufficiently transparent notices. The Italian Competition Authority had already investigated the case, which was closed following commitments adopted by the bank. See the IDPA press release here and the full decision here.

    On 12 March 2026, the European Commission and the European Data Protection Board published the individual submissions received in response to the public consultation on the draft joint guidelines on the interplay between the Digital Markets Act and the GDPR. See the Commission update here.

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.