Data Bytes 64: Your UK and European Data Privacy update for February 2026
Welcome back to this month’s edition of Data Bytes. This month, two key themes continue to be top of the agenda for regulators around the world: the risks associated with AI and a focus on keeping children safe online in a rapidly changing digital age.
The spotlight (and therefore our spotlight!) is firmly on agentic AI, with both the UK's ICO and Singapore's IMDA publishing frameworks to address its distinct risks. While the ICO takes an exploratory approach, flagging concerns around controllership and automated decision-making, Singapore offers more concrete, actionable guidance.
Children's privacy continues to dominate enforcement. The ICO fined MediaLab nearly £250,000 for Imgur's failure to implement effective age assurance, and Reddit received a hefty £14.47 million penalty for similar shortcomings. Meanwhile, the UK Government is moving to close gaps in the Online Safety Act, particularly around chatbots and intimate image abuse. The direction of travel is unmistakable: platforms must embed robust protections from the outset.
Finally, the new ICO complaints guidance deserves attention: organisations must ensure their processes comply with the new complaints requirements under the Data (Use and Access) Act by 19 June 2026, and the ICO is clear on its expectations.
Get your Data Bytes here.
It’s still early in 2026 but we are already seeing a theme emerge as regulators across the world home in on agentic AI and the data protection issues it presents. In this month’s spotlight we look at publications from the ICO and the Singapore Infocomm Media Development Authority on the challenges of regulating this emerging technology, and compare their approaches.
January saw the ICO publish its Tech Futures Report on agentic AI (ICO Agentic AI Report), setting out its understanding of agentic AI, identifying potential use cases and flagging key data protection issues. Building on the ICO's generative AI consultation series over 2024, the report identifies novel risks including: challenges in determining controllership designation; increased automated decision-making; overly broad processing purposes leading to unintended data use (including special category data); reduced transparency affecting individuals' ability to exercise their rights; and new cyber security threats.
No binding guidance (yet): While the ICO Agentic AI Report does not set out the ICO's official guidance or formal regulatory expectations on agentic AI, it is a helpful indicator on the ICO's current position on the key risks and existing frameworks relevant to agentic AI deployment. It also notes the ICO's active monitoring and intended work in relation to agentic AI, including its intention to:
Preparing for the dawn of a new normal?: In its related press release, the ICO highlighted how the rise of agentic AI could be transformational in people's day-to-day lives, flagging that personal shopping "AI-agents" are potentially arriving within the next five years and "so-called agentic commerce is set to become a mainstay of peoples' busy lives". It notes that setting out strong data protection foundations at the advent of this new agentic AI era can "help build…public trust and…scale the fast and safe adoption of AI".
The Digital Regulation Cooperation Forum also recognised opportunities and risks of agentic AI through the release of a webinar on the topic in December 2025. We expect further updates and potentially new guidance from these UK regulators in the coming months.
In January 2026, the Singapore Infocomm Media Development Authority (IMDA) published its Model Governance Framework for Agentic AI (IMDA Agentic AI Framework), building on earlier frameworks from 2020 and 2024. The framework identifies risks including: agents' access to sensitive data and potential for data exfiltration; agents being exploited to reveal information or failing to recognise sensitive data; unpredictable outcomes from increased autonomy and reduced human oversight; accountability challenges in multi-agent ecosystems; and cascading errors across connected systems.
Unlike the ICO's exploratory approach, the IMDA framework offers concrete, actionable recommendations across four dimensions: assessing and bounding risks through guardrails and constraints; establishing clear accountability chains with meaningful human oversight; implementing technical controls and continuous monitoring; and enabling end-user responsibility through proper education and training. The framework is designed to complement Singapore's existing AI governance ecosystem, and the IMDA has invited public feedback to refine its approach.
Both publications reflect proactive regulatory attention to agentic AI's distinct challenges, acknowledging novel data protection risks. Their regulatory lenses differ: the ICO report anchors in data protection law and the UK GDPR, whilst the IMDA framework adopts a broader technology governance perspective addressing system-level risks. Organisations deploying agentic AI across jurisdictions should consider the complementary insights each offers and monitor developments as more detailed guidance is likely to emerge this year.
On 12 February 2026, the ICO published its data protection complaints guidance setting out what organisations need to do to meet the new requirements under the Data (Use and Access) Act to have a data protection complaints process.
Specific call-outs from the guidance are:
Although these requirements do not come into force until 19 June 2026, organisations should review their current complaints process to assess how it can be adapted, and start to consider how documents (e.g., privacy notices and overarching data protection policies), internal procedures (e.g., complaints handling procedures) and data protection training need to be updated to ensure readiness for the June deadline.
On 5 February 2026, the ICO announced it has fined MediaLab.AI, Inc. (owner of image sharing and hosting platform Imgur), £247,590 for unlawfully processing children’s personal data. The ICO’s investigation found that from September 2021 to September 2025, the company breached the UK GDPR by collecting and using children’s information without appropriate safeguards.
The regulator determined that MediaLab failed to introduce effective age assurance measures, processed data relating to children under 13 without obtaining parental consent or identifying another lawful basis and did not carry out a data protection impact assessment to assess and mitigate privacy risks. Although Imgur’s terms stated that children under 13 required parental supervision, the company had no practical systems to verify users’ ages or secure the necessary consent, contrary to UK legal requirements.
When calculating the fine, the ICO took into account the number of affected children, the potential harm involved, the duration of the breaches, and MediaLab’s global turnover. It also noted the company’s commitment to remedy the issues should Imgur resume operations in the UK.
This enforcement action highlights the ICO’s ongoing priority of protecting children’s privacy online and signals that social media and content-sharing platforms must implement robust age checks, obtain parental consent where required, and conduct data protection impact assessments. The ICO has indicated that this penalty forms part of a wider programme looking to improve the safety of children’s personal information online, warning that organisations failing to meet their obligations can expect similar enforcement measures.
On 15 February 2026, the UK Government published a press release outlining action it is taking to keep children safe online amid rapid technological developments. It intends to:
These actions come on the heels of UK regulatory investigations into X, following reports of the Grok AI chatbot being used to create sexual deepfakes of individuals, including children. See last month's update on the ICO investigation [link to last data bytes]. In early February, Ofcom also published an update on its investigation into X, noting that, because of the way the Online Safety Act applies to chatbots, it is currently unable to investigate the creation of illegal images by the standalone Grok AI chatbot, highlighting a gap in the legal framework.
A couple of days after the press release, the UK Government also announced a proposed UK law to require tech platforms to remove intimate images which have been shared without consent within 48 hours, noting that tackling intimate image abuse should be treated with the same severity as child sexual abuse material and terrorist content.
It is clear the UK Government is looking to ensure the UK legal framework is able to keep people, particularly children, safe online in a rapidly changing digital age.
On 10 February 2026, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) published their Joint Opinion 2/2026 on the European Commission's Digital Omnibus proposal. While the EDPB and EDPS welcome certain simplification measures, they express significant concerns regarding the proposed changes to (a) the definition of personal data and (b) the new Article 88c GDPR on legitimate interest in the context of the development and operation of AI.
Definition of Personal Data: The authorities strongly oppose the proposed amendment, which would state that information does not become personal for an entity merely because a subsequent recipient could identify individuals. They argue this contradicts CJEU case law (particularly EDPS v SRB), would narrow the concept of personal data, and could enable controllers to exploit loopholes by artificially separating processing activities from identification means. They also oppose the proposed Article 41a GDPR, which would empower the Commission to specify when pseudonymised data ceases to be personal data.
Legitimate Interest for AI (Article 88c GDPR): The EDPB and EDPS consider the proposed provision unnecessary, given existing guidance confirms legitimate interest may already serve as a legal basis for AI development. If retained, they recommend: requiring controllers to conduct the full three-step legitimate interest assessment; adding the "unconditional right to object" to Article 21 GDPR; defining "operation of an AI system"; and clarifying enhanced transparency requirements.
Further Topics: The authorities welcome the harmonised definition of "scientific research" and support new exceptions for biometric identity verification where templates remain under data subjects' sole control. They support simplified transparency obligations for SMEs but insist data subjects must still obtain full information upon request. On automated decision-making, they urge retaining the prohibition-in-principle structure. Regarding data breaches, they support raising notification thresholds and extending deadlines but recommend the EDPB—rather than the Commission—adopt common templates. Finally, whilst supporting measures to address cookie consent fatigue, they warn that splitting rules between GDPR and the ePrivacy Directive may create legal uncertainty.
Council Response: According to the compromise text leaked on 20 February 2026, the Council of the European Union is proposing to respond to this criticism. The Council proposes to keep the definition of personal data unchanged and to strike the provision which would empower the Commission to adopt implementing acts that specify criteria when pseudonymised data should not be considered personal data. The compromise text does not propose amending Article 88c GDPR.
On 22 January 2026, the European Commission published version 1.4 of its Frequently Asked Questions on the Data Act. The FAQs support the ongoing implementation of the Data Act. Key changes in version 1.4 include:
On 20 January 2026, the European Commission published its draft for a comprehensive revision of the Cyber Security Act ("CSA 2.0"), which is intended to fully replace and repeal the current CSA. The CSA 2.0 aims to simplify the previous voluntary certification procedures for ICT products, ICT services and ICT processes.
Under the CSA 2.0, companies will be able to have their "Cyber Posture" certified. This allows companies to demonstrate their cybersecurity level – not just for individual products or services, but with regard to their specific cybersecurity compliance requirements across the entire organization.
The CSA 2.0 introduces an EU-wide framework for trusted ICT supply chains, enabling companies to reduce the "non-technical risks" originating from their suppliers (e.g., economic espionage directed from third countries, malicious cyber activities, or state-sponsored cyberattacks).
The CSA 2.0 further expands the role of ENISA by extending ENISA's certification tasks, entrusting ENISA with the creation and maintenance of a register of essential and important entities, and significantly strengthening its role in operational cooperation with various institutions within the cybersecurity ecosystem.
On 3 February and 5 February 2026, the Higher Regional Courts of Dresden and Naumburg ruled that Meta Platforms' collection of personal data via its Business Tools contravenes GDPR provisions. The courts ordered damages of EUR 1,500 per plaintiff in Dresden and EUR 1,200-1,250 per plaintiff in Naumburg. Both courts held that Meta's invisible tracking pixels and APIs on third-party websites and apps, which enable Meta to create user profiles, require consent and violate the data minimisation principle. The courts further held that Meta had processed personal data for purposes unrelated to those for which it had originally collected the data, and that the tracking also targets individuals who do not hold an account with a Meta platform service (such as Facebook or Instagram).
According to these court rulings, a data subject can claim non-material damages in a four-digit amount for the alleged loss of control over their personal data, merely on the basis of unlawful profiling as such, without needing to prove further negative consequences. These legally binding decisions stand in a series of court actions against Meta across various German higher regional courts, which have so far been decided against Meta. Notably, the courts have excluded an appeal to the BGH. However, further cases regarding Meta's Business Tools pending before the Higher Regional Court of Munich are currently on appeal to the BGH, which may lead to further guidance going forward.
For providers of social media platforms, these decisions indicate the increasing exposure to collective enforcement actions where groups of data subjects may recover meaningful damages without needing to establish detailed evidence as to their own personal situation. Companies using social media accounts for promoting their products and services should audit their use of third-party tracking tools, ensure valid consent mechanisms are in place, and review their data processing agreements with platform providers.
On 13 January 2026, the CNIL imposed fines of €27 million and €15 million on Free Mobile and Free respectively, following several data breaches affecting subscribers (https://www.cnil.fr/en/sanction-free-2026).
First, the CNIL determined that the companies had failed to implement security measures appropriate to the risks, as required under Article 32 of the GDPR. In light of the scale of the breach, the volume of data implicated, and the sensitive nature of the banking information involved, the authority concluded that the technical and organisational safeguards in place were insufficiently robust.
Second, the CNIL identified deficiencies in the notification of affected individuals (Article 34 of GDPR). Where a personal data breach is likely to result in a high risk to individuals’ rights and freedoms, clear and comprehensive information must be provided on the nature of the breach, its potential consequences and the mitigation measures adopted.
Finally, the authority found a breach of the storage limitation principle (Article 5(1)(e) of the GDPR), as data relating to former subscribers had been retained for longer than necessary.
The case originated from a complaint filed on 28 September 2023, after the claimant, a customer of CURENERGÍA COMERCIALIZADOR DE ÚLTIMO RECURSO, S.A.U. ("Curenergía"), received an email on 2 August 2023 containing personal data of another customer. This email included that customer's name and surname, the existence and amount of a debt owed to the company, and contract and invoice reference numbers. The incident occurred due to human error by an employee of a collaborating channel who was simultaneously attending to two customers through a chat service and mistakenly associated the claimant's email address with the file of a different customer.
The AEPD determined that Curenergía failed to implement adequate technical and organisational measures to mitigate the inherent risks of its customer service chat channel, which allowed agents to attend to multiple customers simultaneously, thereby creating a high probability of errors leading to data inaccuracies. Curenergía had no procedures to detect or correct such errors before they caused harm.
The AEPD imposed a sanction of €500,000 on Curenergía for infringement of Article 25 of the GDPR (data protection by design and by default), classified as a serious infringement under Article 83.4.a) of the GDPR. The initial proceedings had considered a potential violation of Article 5.1.d) of the GDPR (principle of data accuracy), but following the instruction phase and the evidence gathered, the AEPD concluded that the facts more appropriately constituted a violation of Article 25.
The AEPD considered the following factors when setting the fine:
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.