Legal development

Data Bytes 64: Your UK and European Data Privacy update for February 2026 


    Welcome back to this month’s edition of Data Bytes. This month you will see that two key themes continue to be top of the agenda for regulators around the world: the risks associated with AI and a focus on keeping children safe online in a rapidly changing digital age. 

    The spotlight (and therefore our spotlight!) is firmly on agentic AI, with both the UK's ICO and Singapore's IMDA publishing frameworks to address its distinct risks. While the ICO takes an exploratory approach, flagging concerns around controllership and automated decision-making, Singapore offers more concrete, actionable guidance. 

    Children's privacy continues to dominate enforcement. The ICO fined MediaLab nearly £250,000 for Imgur's failure to implement effective age assurance, and Reddit received a hefty £14.47 million penalty for similar shortcomings. Meanwhile, the UK Government is moving to close gaps in the Online Safety Act, particularly around chatbots and intimate image abuse. The direction of travel is unmistakable: platforms must embed robust protections from the outset.

    Finally, the new ICO complaints guidance deserves attention: organisations must ensure their processes are compliant by 19 June 2026 with the new complaints requirements from the Data Use and Access Act and the ICO is clear on its expectations. 

    Get your Data Bytes here. 

    Spotlight on Agentic AI: UK and Singapore Regulators Mapping out the Way 

    It’s still early in 2026 but we are already seeing a theme emerge as regulators across the world home in on agentic AI and the data protection issues it presents. In this month’s spotlight we look at publications from the ICO and the Singapore Infocomm Media Development Authority on the challenges of regulating this emerging technology, and compare their approaches.

    UK

    January saw the ICO publish its Tech Futures Report on agentic AI (ICO Agentic AI Report), setting out its understanding of agentic AI, identifying potential use cases and flagging key data protection issues. Building on the ICO's generative AI consultation series over 2024, the report identifies novel risks including: challenges in determining controllership designation; increased automated decision-making; overly broad processing purposes leading to unintended data use (including special category data); reduced transparency affecting individuals' ability to exercise their rights; and new cyber security threats. 

    No binding guidance (yet): While the ICO Agentic AI Report does not set out the ICO's official guidance or formal regulatory expectations on agentic AI, it is a helpful indicator on the ICO's current position on the key risks and existing frameworks relevant to agentic AI deployment. It also notes the ICO's active monitoring and intended work in relation to agentic AI, including its intention to: 

    • provide clarity on its regulatory expectations around agentic AI as part of the development of its code of practice on AI and automated decision-making, having identified the increased potential for automated decision-making as a novel agentic AI risk;
    • work with the Digital Regulation Cooperation Forum to understand the cross-regulatory implications of agentic AI, and continue its work with international partners through the G7 Data Protection Authorities Emerging Technologies Working Group; and
    • host workshops with industry to gather further information on agentic AI, including on agentic capabilities and adoption and how industry is mitigating data protection and privacy risks, and invite stakeholders working on agentic AI applications to access its innovation support services. 

    Preparing for the dawn of a new normal?: In its related press release, the ICO highlighted how the rise of agentic AI could transform people's day-to-day lives, flagging that personal shopping "AI-agents" may arrive within the next five years and that "so-called agentic commerce is set to become a mainstay of peoples' busy lives". It notes that setting out strong data protection foundations at the advent of this new agentic AI era can "help build…public trust and…scale the fast and safe adoption of AI".

    The Digital Regulation Cooperation Forum also recognised opportunities and risks of agentic AI through the release of a webinar on the topic in December 2025. We expect further updates and potentially new guidance from these UK regulators in the coming months.

    Singapore

    In January 2026, the Singapore Infocomm Media Development Authority (IMDA) published its Model Governance Framework for Agentic AI (IMDA Agentic AI Framework), building on earlier frameworks from 2020 and 2024. The framework identifies risks including: agents' access to sensitive data and potential for data exfiltration; agents being exploited to reveal information or failing to recognise sensitive data; unpredictable outcomes from increased autonomy and reduced human oversight; accountability challenges in multi-agent ecosystems; and cascading errors across connected systems. 

    Unlike the ICO's exploratory approach, the IMDA framework offers concrete, actionable recommendations across four dimensions: assessing and bounding risks through guardrails and constraints; establishing clear accountability chains with meaningful human oversight; implementing technical controls and continuous monitoring; and enabling end-user responsibility through proper education and training. The framework is designed to complement Singapore's existing AI governance ecosystem, and the IMDA has invited public feedback to refine its approach. 

    How do they compare? 

    Both publications reflect proactive regulatory attention to agentic AI's distinct challenges, acknowledging novel data protection risks. Their regulatory lenses differ: the ICO report anchors in data protection law and the UK GDPR, whilst the IMDA framework adopts a broader technology governance perspective addressing system-level risks. Organisations deploying agentic AI across jurisdictions should consider the complementary insights each offers and monitor developments as more detailed guidance is likely to emerge this year.

    UK Updates

    ICO Publishes Data Protection Complaints Guidance 

    On 12 February 2026, the ICO published its data protection complaints guidance setting out what organisations need to do to meet the new requirements under the Data (Use and Access) Act to have a data protection complaints process. 

    Specific call outs from the guidance are: 

    • Organisations do not need a dedicated data protection complaints process; an existing complaints process can be used provided it meets the statutory obligations.
    • A complaint cannot be refused simply because an individual has not followed the set process. Employees may receive complaints directly, so organisations should ensure all staff are trained on the complaints process so that complaints are appropriately handled. 
    • Acknowledgement of complaints must be provided within 30 days (starting on the day after the complaint is received), with an extension to the next working day if the deadline falls on a weekend or public holiday.
    • The obligation to investigate "without undue delay" is triggered on receipt of the complaint, not the date of the acknowledgement. Individuals must be kept informed during the investigation and notified of the outcome without unjustifiable or excessive delay.
    • The right to complain to the organisation must be referenced in privacy notices and in responses to data subject rights requests, and must be stated separately from the right to complain to the ICO. 

    Although these requirements do not come into force until 19 June 2026, organisations should review their current complaints process to assess how it can be adapted, and start to consider how documents (e.g., privacy notices and overarching data protection policies), internal procedures (e.g., complaints handling procedures) and data protection training need to be updated to ensure readiness for the June deadline. 

    ICO fines Imgur owner MediaLab for children's privacy failures

    On 5 February 2026, the ICO announced it has fined MediaLab.AI, Inc. (owner of image sharing and hosting platform Imgur), £247,590 for unlawfully processing children’s personal data. The ICO’s investigation found that from September 2021 to September 2025, the company breached the UK GDPR by collecting and using children’s information without appropriate safeguards. 

    The regulator determined that MediaLab failed to introduce effective age assurance measures, processed data relating to children under 13 without obtaining parental consent or identifying another lawful basis and did not carry out a data protection impact assessment to assess and mitigate privacy risks. Although Imgur’s terms stated that children under 13 required parental supervision, the company had no practical systems to verify users’ ages or secure the necessary consent, contrary to UK legal requirements. 

    When calculating the fine, the ICO took into account the number of affected children, the potential harm involved, the duration of the breaches, and MediaLab’s global turnover. It also noted the company’s commitment to remedy the issues should Imgur resume operations in the UK. 

    This enforcement action highlights the ICO’s ongoing priority of protecting children’s privacy online and signals that social media and content-sharing platforms must implement robust age checks, obtain parental consent where required, and conduct data protection impact assessments. The ICO has indicated that this penalty forms part of a wider programme looking to improve the safety of children’s personal information online, warning that organisations failing to meet their obligations can expect similar enforcement measures. 

    Mind the Gap in the Online Safety Act: UK Government's latest crackdown on online safety

    On 15 February 2026, the UK Government published a press release, outlining action it is taking to keep children safe online amid rapid technological developments. It intends to: 

    • Close loopholes in the Online Safety Act on chatbots: Through tabling an amendment to the Crime and Policing Bill, the UK Government intends to require chatbots not currently in the scope of the Online Safety Act to protect their users from illegal content; and
    • Focus on the protection of children's digital wellbeing: The press release notes new powers are also tabled for the Children's Wellbeing and Schools Bill to enable the UK Government to introduce targeted actions off the back of the upcoming children's digital wellbeing consultation, along with another amendment to the Crime and Policing Bill to give effect to measures around the preservation of child social media data. 

    These actions come on the heels of UK regulatory investigations into X, following reports of the Grok AI chatbot being used to create sexual deepfakes of individuals, including children. See last month's update on the ICO investigation [link to last data bytes]. In early February, Ofcom also published an update on the steps in its investigation into X, noting that, because of the way the Online Safety Act applies to chatbots, it is currently unable to investigate the creation of illegal images by the standalone Grok AI chatbot, highlighting a gap in the legal framework. 

    A couple of days after the press release, the UK Government also announced a proposed UK law to require tech platforms to remove, within 48 hours, intimate images which have been shared without consent, noting that intimate image abuse should be treated with the same severity as child sexual abuse material and terrorist content.

    It is clear the UK Government is looking to ensure the UK legal framework can keep people, particularly children, safe online in a rapidly changing digital age.

    In case you missed it: 

    UK

    1. On 2 February 2026, the ICO published a letter to the Government with an update on progress against its January 2025 economic growth commitments (covering AI regulatory certainty, SME support, innovation sandboxes, privacy-preserving online advertising, and international data transfers) and outlined further plans to continue building on its impact in 2026. See more details here.
    2. On 5 February 2026, the ICO published its updated data protection by design guidance which was updated to reflect changes following the Data (Use and Access) Act 2025 and includes a new subsection on 'children's higher protection matters'. See more details here.
    3. On 5 February 2026, the ICO published its updated codes of conduct guidance to reflect changes from the Data (Use and Access) Act 2025, including that codes of conduct may be developed under Part 3 of the Data Protection Act 2018 and under the Privacy and Electronic Communications Regulations (PECR). See the updated guidance here.
    4. On 9 February 2026, in Spurgeon & Ors v Capita Plc [2026] EWHC 241 (KB), the High Court refused to strike out group data protection claims brought by nearly 4,000 claimants affected by a 2023 cyber-attack, rejecting the defendant's argument that the claimants' solicitors had committed an abuse of process. See the full case here.
    5. The ICO has published a response to the Home Office's consultation on a new legal framework for law enforcement use of biometrics and facial recognition technology, emphasising that data protection law must remain central, and highlighting the need for clear regulatory coherence, statutory consultation duties, and memoranda of understanding to reduce the risk of diverging approaches by oversight bodies. See the full response here.
    6. On 11 February 2026, the ICO announced that two additional persons had been convicted in connection with the ICO's investigation into the unlawful accessing and sale of personal data from garages, and claims management and insurance companies. See the announcements here and here.
    7. On 19 February 2026, the ICO welcomed the Court of Appeal's ruling in its favour against DSG Retail Limited, which confirms that organisations must take appropriate security measures to protect all personal data they process against unauthorised access, regardless of whether hackers could identify individuals from the exfiltrated data. See the statement from the ICO here.
    8. On 24 February 2026, the ICO published its decision to fine Reddit, Inc. (Reddit) £14.47 million after finding the company failed to use children's personal information lawfully, namely that Reddit failed to apply any robust age assurance mechanism and failed to carry out a DPIA to assess and mitigate risks to children before January 2025.

    EU Updates

    EDPB and EDPS publish Joint Opinion on the Digital Omnibus Proposal; Council rumoured to respond 

    On 10 February 2026, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) published their Joint Opinion 2/2026 on the European Commission's Digital Omnibus proposal. While the EDPB and EDPS welcome certain simplification measures, they express significant concerns regarding the proposed changes to (a) the definition of personal data and (b) the new Article 88c GDPR on legitimate interest in the context of the development and operation of AI. 

    Definition of Personal Data: The authorities strongly oppose the proposed amendment, which would state that information does not become personal for an entity merely because a subsequent recipient could identify individuals. They argue this contradicts CJEU case law (particularly EDPS v SRB), would narrow the concept of personal data, and could enable controllers to exploit loopholes by artificially separating processing activities from identification means. They also oppose the proposed Article 41a GDPR, which would empower the Commission to specify when pseudonymised data ceases to be personal data. 

    Legitimate Interest for AI (Article 88c GDPR): The EDPB and EDPS consider the proposed provision unnecessary, given existing guidance confirms legitimate interest may already serve as a legal basis for AI development. If retained, they recommend: requiring controllers to conduct the full three-step legitimate interest assessment; adding the "unconditional right to object" to Article 21 GDPR; defining "operation of an AI system"; and clarifying enhanced transparency requirements. 

    Further Topics: The authorities welcome the harmonised definition of "scientific research" and support new exceptions for biometric identity verification where templates remain under data subjects' sole control. They support simplified transparency obligations for SMEs but insist data subjects must still obtain full information upon request. On automated decision-making, they urge retaining the prohibition-in-principle structure. Regarding data breaches, they support raising notification thresholds and extending deadlines but recommend the EDPB—rather than the Commission—adopt common templates. Finally, whilst supporting measures to address cookie consent fatigue, they warn that splitting rules between GDPR and the ePrivacy Directive may create legal uncertainty. 

    Council Response: According to the compromise text leaked on 20 February 2026, the Council of the European Union is proposing to respond to this criticism. The Council proposes to keep the definition of personal data unchanged and to strike the provision which would empower the Commission to adopt implementing acts that specify criteria when pseudonymised data should not be considered personal data. The compromise text does not propose amending Article 88c GDPR. 

    European Commission publishes updated FAQs on the Data Act 

    On 22 January 2026, the European Commission published version 1.4 of its Frequently Asked Questions on the Data Act. The FAQs support the ongoing implementation of the Data Act. Key changes in version 1.4 include: 

    • Data quality: The FAQs now clarify that "same quality" means data that is "accurate, complete, reliable, relevant and up-to-date". Data holders must share data of the same quality under their data access and sharing obligations.
    • Union standards repository: The European Commission clarifies that the interoperability obligation does not extend to a provider's entire service, but specifically to those interfaces (e.g. APIs) that customers use for data exchange and switching.
    • Maritime vessels, aircraft and vehicles: The European Commission clarifies that maritime vessels, aircraft or vehicles (and similar products) are to be considered as "placed on the market" only when they are released for free circulation with the customs status of Union goods. Mere presence in EU territory or waters (including transit) does not trigger the respective Data Act obligations for owners or operators of such connected products.
    • Further clarification regarding Model Contractual Terms for data-sharing and Standard Contractual Clauses for cloud-switching: The European Commission references in the FAQs its new Model Contractual Terms for data access and use (governing IoT data sharing under Chapters II-IV of the Data Act) as well as Standard Contractual Clauses for cloud computing contracts (governing switching between data processing services under Chapter VI of the Data Act). 
    • Reasonable compensation guidance: The European Commission expects to adopt guidelines on the calculation of reasonable compensation for data sharing under Chapter III by Q2/Q3 2026. The European Commission will consult the European Data Innovation Board before the adoption. 

    CSA 2.0 Proposal 

    On 20 January 2026, the European Commission published its draft for a comprehensive revision of the Cyber Security Act ("CSA 2.0"), which is intended to fully replace and repeal the current CSA. The CSA 2.0 aims to simplify the previous voluntary certification procedures for ICT products, ICT services and ICT processes. 

    Under the CSA 2.0, companies will be able to have their "Cyber Posture" certified. This allows companies to demonstrate their cybersecurity level – not just for individual products or services, but with regard to their specific cybersecurity compliance requirements across the entire organization. 

    The CSA 2.0 introduces an EU-wide framework for trusted ICT supply chains, enabling companies to reduce the "non-technical risks" originating from their suppliers (e.g., economic espionage directed from third countries, malicious cyber activities, or state-sponsored cyberattacks). 

    The CSA 2.0 further expands the role of ENISA by extending ENISA's certification tasks, entrusting ENISA with the creation and maintenance of a register of essential and important entities, and significantly strengthening its role in operational cooperation with various institutions within the cybersecurity ecosystem. 

    Germany Update

    German Higher Regional Courts award four-figure damages for Meta's unlawful data collection via Business Tools 

    On 3 February and 5 February 2026, the Higher Regional Courts of Dresden and Naumburg ruled that Meta Platforms' collection of personal data via its Business Tools contravenes GDPR provisions. The courts ordered damages of EUR 1,500 per plaintiff in Dresden and EUR 1,200-1,250 per plaintiff in Naumburg. Both courts held that Meta's invisible tracking pixels and APIs on third-party websites and apps, which enable Meta to create user profiles, require consent and violate the data minimisation principle. The courts further held that Meta had processed personal data for purposes unrelated to those for which it had originally been collected, and that the tracking also targets individuals who do not hold an account with a Meta platform service (such as Facebook or Instagram). 

    According to these court rulings, a data subject can claim non-material damages in a four-digit amount for the alleged loss of control over their personal data, merely on the basis of unlawful profiling as such, without the need to prove further negative consequences. These legally binding decisions stand in a series of court actions against Meta across various German higher regional courts, which so far have been adjudicated against Meta. Notably, the courts have excluded an appeal to the BGH. However, further cases regarding Meta's Business Tools pending before the Higher Regional Court of Munich are currently on appeal to the BGH, which may lead to further guidance going forward.

    For providers of social media platforms, these decisions indicate the increasing exposure to collective enforcement actions where groups of data subjects may recover meaningful damages without needing to establish detailed evidence as to their own personal situation. Companies using social media accounts for promoting their products and services should audit their use of third-party tracking tools, ensure valid consent mechanisms are in place, and review their data processing agreements with platform providers.

    France Update

    CNIL fines FREE MOBILE and FREE €42 million for data breaches

    On 13 January 2026, the CNIL imposed fines of €27 million and €15 million respectively on Free Mobile and Free following several data breaches affecting subscribers (see the CNIL's decision notice: https://www.cnil.fr/en/sanction-free-2026).

    First, the CNIL determined that the companies had failed to implement security measures appropriate to the risks, as required under Article 32 of the GDPR. In light of the scale of the breach, the volume of data implicated, and the sensitive nature of the banking information involved, the authority concluded that the technical and organisational safeguards in place were insufficiently robust. 

    Second, the CNIL identified deficiencies in the notification of affected individuals (Article 34 of GDPR). Where a personal data breach is likely to result in a high risk to individuals’ rights and freedoms, clear and comprehensive information must be provided on the nature of the breach, its potential consequences and the mitigation measures adopted.

    Finally, the authority found a breach of the storage limitation principle (Article 5(1)(e) of GDPR), as data relating to former subscribers had been retained for longer than necessary. 

    Spain Update

    Curenergía sanctioned by the AEPD with a fine of EUR 500,000 for a data breach caused by a deficient chat channel procedure 

    The case originated from a complaint filed on September 28, 2023, after the claimant, a customer of CURENERGÍA COMERCIALIZADOR DE ÚLTIMO RECURSO, S.A.U. ("Curenergía"), received an email on August 2, 2023, containing personal data of another customer. This email included that customer's name and surname, the existence and amount of a debt owed to the company, and contract and invoice reference numbers. The incident occurred due to human error by an employee of a collaborating channel who was attending to two customers simultaneously through a chat service. This employee mistakenly associated the claimant's email address with the file of a different customer. 

    The AEPD determined that Curenergía failed to implement adequate technical and organizational measures to mitigate the inherent risks of its customer service chat channel, which allowed agents to attend multiple customers simultaneously, thereby creating a high probability of errors leading to data inaccuracies. Curenergía had no procedures to detect or correct such errors before they caused harm. 

    The AEPD imposed a sanction of €500,000 on Curenergía for infringement of Article 25 of the GDPR (privacy by design and by default), classified as a serious infringement under Article 83.4.a) of the GDPR. The initial proceedings had considered a potential violation of Article 5.1.d) of the GDPR (principle of data accuracy), but following the investigation phase and the evidence gathered, the AEPD concluded that the facts more appropriately constituted a violation of Article 25. 

    The AEPD considered the following factors when setting the fine:

    • Scope of the Deficiency: The deficiency affected a communication channel with approximately 15,030 interactions in 2023. Therefore, the potential impact extended to all of Curenergía's approximately 3 million customers who faced similar risks due to a systemic lack of preventive measures. 
    • Aggravating Factors: The breach involved debt information, a type of data that affects individuals' economic reputation. The AEPD also found negligence, since Curenergía had failed to conduct an adequate risk analysis for its chat channel operations. 

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.