Legal development

Data Bytes 59: Your UK and European Data Privacy update for June 2025

    Welcome back to our June edition of Data Bytes. The Data Bytes team are excited to announce that yesterday we launched our new podcast series "Ashurst Data Bytes", opening with a series on the UK's new Data (Use and Access) Act. You can listen to Ashurst Data Bytes here.

    In episode 1, our UK data protection team discuss the data privacy-specific changes that the Act brings in and what they mean for data protection practitioners. If you're not a fan of the podcast format, keep scrolling to our spotlight section, where we have summarised the discussion.

    As many of you will be aware by now, the Act does not just change privacy law; it has far-reaching consequences for the use and sharing of data. The next episode in the series will focus on the impact the new law will have on AI and its regulation in the UK. In future podcasts we will be looking at the framework for Smart Data schemes and comparisons with the EU Data Act; the new Digital ID provisions; changes to marketing and cookie laws; and finally the impact on data-related investigations.

    Get your data bytes here (and please listen to our podcast too!)

    Updates from the UK

    1. ICO publishes draft guidance on the Internet of Things

    On 16 June, the ICO released for consultation draft guidance and a draft impact assessment on consumer Internet of Things (IoT) products and services. The guidance covers consumer IoT products including wellbeing products (e.g., fitness trackers), home entertainment products (e.g., smart speakers), home automation devices (e.g., smart lightbulbs), domestic appliances, home security devices (e.g., smart doorbells), over-the-counter medical devices (e.g., smart blood pressure monitors), and "peripherals" (e.g., smart headphones). It does not cover connected/autonomous vehicles, smart meters, or any use of IoT products and services outside of a consumer setting.

    The guidance seeks to address concerns raised during a consultation last year on smart products, and covers areas such as:

    • whether IoT products are "terminal equipment" for the purposes of PECR
    • how organisations can ensure they are seeking informed consent and providing transparent privacy information (e.g., on wearable devices)
    • whether IoT products process special category data. To this point, the guidance states that "IoT products and services may use special category information, whether directly or by inference; for example, where: the core functionality of your product requires this data; or you intentionally infer it, for example to provide your users with a ‘health score’ reflecting your assessment of their physical health." This helpfully provides some clarification/indication that the ICO is focused on the purpose of the processing.
    • biometric risks, addressed specifically through the example of a smart speaker: "If a user asked a voice assistant a query without having voice ID set up for their voice, their voice query is also biometric special category data. This is because the voice query is still processed by the voice assistant to match against any existing voice IDs. So it is at least partly processed for the purpose of uniquely identifying someone."

    As the guidance states that "most processing involving IoT products is likely to result in a high risk", organisations processing personal data through IoT products (in a consumer setting) should update any existing DPIAs in light of this guidance (for example, the guidance focuses on the processing of location data, which organisations may need to consider) and ensure that the processing activities described in those DPIAs comply and/or align with it.

    You can respond to the consultation until 7 September via a survey on Citizens Space. The guidance is expected to be finalised in late-2025/early-2026.

    2. ICO launches AI and Biometrics Strategy 

    On 5 June, the ICO published its new AI and biometrics strategy, with the Information Commissioner quoted as saying that "people need to trust their personal information is in safe hands" and that the ICO's role is to "scrutinise emerging technologies, such as agentic AI". The common theme of the Strategy is a focus on three cross-cutting issues: transparency and explainability; bias and discrimination; and rights and redress. The ICO seeks to:

    Promote certainty for the use of AI and automated decision-making (ADM) through statute by:

    • Setting clear expectations for responsible AI through a statutory code of practice.

    • Consulting on and updating its existing ADM and profiling guidance by Autumn 2025, reflecting reforms to ADM in the Data (Use and Access) Act. 

    Scrutinise foundation AI model developers by:

    • Seeking assurances from developers that personal data used in AI model training is safeguarded, with appropriate controls to prevent misuse or reproduction of sensitive information, including child sexual abuse material.

    • Setting clear regulatory expectations on AI model training (including when using special category data). The ICO also indicates that it will be focusing on taking enforcement action where unlawful AI model training creates risks or harms.

    Address concerns about the use of ADM in recruitment and public services by setting clear expectations on the responsible use of ADM in recruitment and scrutinising its use by major employers and recruitment platforms.

    Ensure fair and proportionate use of facial recognition technology (FRT) by publishing guidance clarifying how police forces can govern and use FRT, and by engaging the UK Government on proposed changes to the law to ensure any future use of FRT remains proportionate and publicly trusted.

    We encourage any organisation using AI, ADM or biometric processing to familiarise itself with the Strategy (and thereby understand the ICO's perspective) and keep a watching brief on the outputs that the ICO has promised over the next year: an update to its ADM and profiling guidance, a statutory code of practice on AI and ADM, and a horizon-scanning report on the data protection implications of agentic AI.

    3. Ofcom focuses on collaboration and content moderation in its AI strategy

    On 6 June 2025, Ofcom published its strategic approach to AI for 2025/2026, setting out how it plans to support the use of AI across the sectors it regulates. One data protection-related area of focus is Ofcom's work on the use of AI-driven content moderation by online platforms. Ofcom noted it will be concentrating on:

    • the use of AI to understand user behaviours and ages on online platforms – this is particularly relevant to Ofcom's work on protecting children online (which is also a priority topic for the ICO - see our Data Bytes on the ICO's review of the financial sector's use of children's data and the ICO's Children's Code Strategy);

    • exploring the deployment of AI in content moderation, including the use of GenAI to generate synthetic content for training moderation tools; and

    • engaging with large online platforms to understand how these platforms are deploying AI for safety/content moderation (e.g., through adding 'friction' into user journeys) and to guide platforms' use of AI for user safety.

    Ofcom notes that it intends to collaborate with other regulators, such as the ICO, through the Digital Regulation Cooperation Forum (DRCF) to provide regulatory alignment on emerging technologies and encourage safe innovation. In a similar vein, Ofcom is working with the Alan Turing Institute to build a taxonomy for safety technology to help facilitate regulatory alignment and provide clarity for organisations.

    Organisations seeking to develop or deploy AI-driven content moderation technologies risk being caught between Ofcom's push for tackling harmful content at scale and the ICO's action plan to ensure appropriate safeguards and accountability are embedded in AI and automated decision-making activities. It remains to be seen whether Ofcom and the ICO are able, through the DRCF, to deliver the coordinated regulatory alignment referenced in the report.

    4. 23andMe finally faces ICO fine of £2.31 million

    Following a joint investigation with the Canadian data protection regulator, the Office of the Privacy Commissioner of Canada, which has been ongoing since 2023, the ICO has now fined 23andMe, Inc. £2.31 million for failing to implement appropriate security measures (for more background, see our previous update on this). The fine results from the cyber-attack that 23andMe suffered in 2023, which affected over 155,000 users in the UK and involved health and genetic data.

    Key takeaways from this penalty notice:

    • Implement appropriate authentication and verification measures in customers' login processes, including, but not limited to, mandatory multi-factor authentication (the key measure that the ICO religiously focuses on when investigating and imposing enforcement action, and continues to point to in its insights reports), appropriate password security policies and procedures, the ability for customers to use unpredictable usernames, and the use of other additional controls, such as device, connection or address fingerprinting;

    • Implement an appropriate process for regularly testing and assessing the effectiveness of your technical and organisational security measures, specifically in relation to the threat posed to customers' personal data by credential stuffing attacks. Credential stuffing exploits people's re-use of username and password combinations across services, allowing threat actors to use credentials leaked elsewhere to gain access (see the illustrative sketch after this list).

    • The seriousness of the infringements was further aggravated by 23andMe's failure to identify the data breach at an earlier stage, despite "multiple indications of anomalous and unauthorised activity by the threat actor", and by "deficiencies in the content of 23andMe's notifications of the data breach to the ICO."
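
    By way of a purely illustrative sketch (not drawn from the penalty notice), the Python snippet below shows how a login flow might combine the kinds of controls listed above: throttling sources generating many failed logins, checking submitted credentials against a feed of known-leaked pairs, and always stepping up to a second factor. All names, thresholds and the breached-credential set are hypothetical assumptions for illustration only.

```python
import hashlib
import time
from collections import defaultdict

# Hypothetical set of SHA-1 hashes of "email:password" pairs known to have
# been leaked elsewhere (in practice this would come from a breach feed).
KNOWN_BREACHED_HASHES = {
    hashlib.sha1(b"alice@example.com:Password123!").hexdigest(),
}

MAX_FAILURES_PER_WINDOW = 5     # illustrative threshold
WINDOW_SECONDS = 300

failed_attempts = defaultdict(list)  # source IP -> timestamps of failed logins


def is_breached_credential(email: str, password: str) -> bool:
    """Return True if this email/password pair matches a known leaked pair."""
    digest = hashlib.sha1(f"{email}:{password}".encode()).hexdigest()
    return digest in KNOWN_BREACHED_HASHES


def looks_like_credential_stuffing(ip: str) -> bool:
    """Flag source IPs with an unusually high rate of recent failed logins."""
    now = time.time()
    recent = [t for t in failed_attempts[ip] if now - t < WINDOW_SECONDS]
    failed_attempts[ip] = recent
    return len(recent) >= MAX_FAILURES_PER_WINDOW


def handle_login(email: str, password: str, ip: str, password_ok: bool) -> str:
    """Decide what to do with a login attempt (sketch only)."""
    if looks_like_credential_stuffing(ip):
        return "block"                    # throttle or block the source
    if not password_ok:
        failed_attempts[ip].append(time.time())
        return "reject"
    if is_breached_credential(email, password):
        return "force_password_reset"     # credentials known to be leaked
    return "require_mfa"                  # always step up to a second factor
```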

    Let this be yet another reminder to talk to your Info Security colleagues about what security measures, in particular multi-factor authentication methods, are implemented at your organisation. For more ammunition about the importance of MFA, please see our previous update in Data Bytes 57.

    5. ICO reprimands police over CCTV failings

    The ICO issued a reprimand to Greater Manchester Police (GMP) following an investigation into its systemic shortcomings in using and managing CCTV.

    GMP was found to have deployed cameras beyond their stated policing purposes, retained footage for excessive periods, failed to control third-party access, relied on outdated DPIAs and failed to provide adequate staff training or public transparency. Whilst this action was directed at GMP, the findings are highly relevant for all organisations operating CCTV, demonstrating that the routine use of surveillance technology does not lessen organisations' strict data protection obligations. The following takeaways should be keenly noted:

    • Document your lawful basis: Each CCTV camera should be linked to a specific, documented business purpose (e.g. health and safety, crime prevention). A legitimate interests assessment should be completed and kept up to date.
    • Set and enforce retention periods: Footage should only be kept for as long as necessary (typically 30–60 days for routine surveillance), with automated deletion and audit logs to evidence compliance; a minimal sketch of an automated deletion routine appears after this list.
    • Refresh your DPIAs: DPIAs must be updated whenever surveillance expands, new technology is introduced, or risks to individuals increase. This could include adding analytics or moving cameras to sensitive areas.
    • Train staff and inform the public: Regular training for staff handling CCTV is essential, as is clear signage explaining the presence and purpose of surveillance, the identity of the controller, and contact details for data protection queries.
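
    The sketch below illustrates, purely by way of example, what automated deletion with an audit trail might look like in practice. The storage path, file format and 31-day retention period are hypothetical assumptions; any real retention period should follow your documented purpose and policy.

```python
import csv
import datetime
from pathlib import Path

FOOTAGE_DIR = Path("/srv/cctv/footage")        # hypothetical storage location
AUDIT_LOG = Path("/srv/cctv/deletion_log.csv")  # hypothetical audit log
RETENTION_DAYS = 31                             # example only; set per documented policy


def purge_expired_footage() -> None:
    """Delete footage older than the retention period and record each deletion."""
    cutoff = datetime.datetime.now() - datetime.timedelta(days=RETENTION_DAYS)
    with AUDIT_LOG.open("a", newline="") as log:
        writer = csv.writer(log)
        for clip in FOOTAGE_DIR.glob("*.mp4"):
            modified = datetime.datetime.fromtimestamp(clip.stat().st_mtime)
            if modified < cutoff:
                clip.unlink()
                # The audit trail evidences that the retention policy is enforced.
                writer.writerow(
                    [datetime.datetime.now().isoformat(), clip.name, "deleted"]
                )


if __name__ == "__main__":
    purge_expired_footage()
```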

    6. FCA and ICO seek to address industry concerns on constraints to AI adoption 

    On 2 June 2025, the Financial Conduct Authority (FCA) published a new joint article with the ICO, 'Tech, trust and teamwork'. The article acknowledges and addresses concerns raised by financial firms about both data protection and financial services regulation acting as a constraint on AI adoption across the industry.

    The FCA and ICO emphasised their frequent cooperation on this topic and noted that the new Digital Regulation Cooperation Forum (DRCF) workplan includes work over the coming year to provide clarity on the intersection between the Consumer Duty and data protection, as well as on data sharing connected to both open finance and tackling fraud. There is also a proposal for the DRCF to convene a symposium on digital ID.

    Both regulators appear to be making the right noises about collaboration, regulatory agility and providing clarity to financial firms. Although these issues are not new, the ICO and FCA appear to be driven to act now by the government-wide push for promoting growth and innovation. The passage of the Data (Use and Access) Act 2025 last month, which is intended to enable Digital ID services and new smart data schemes, will likely act as a further driver of collaboration between the regulators.

    Updates from the EU

    European Commission starts public consultation on high-risk AI systems under the AI Act

    On 6 June 2025, the European Commission initiated a public consultation aimed at reviewing the requirements for high-risk AI systems under the AI Act, following public debate on outstanding issues and the practical implications of classifying and regulating these systems. The Commission will take the feedback, including practical examples and use cases, into account for its upcoming guidelines, to help providers and deployers better understand when they need to follow the specific requirements and obligations regarding high-risk AI systems.

    The consultation is open to a broad range of stakeholders, including providers and deployers of high-risk AI systems, businesses and public authorities using such systems, as well as academia, research institutions, civil society, governments, supervisory authorities, and the general public. In addition to addressing technical and regulatory questions, the consultation will also examine the allocation of responsibilities along the AI value chain. The consultation is open for six weeks, until 18 July 2025.

    The European Data Protection Board adopts its Guidelines on Article 48 GDPR 

    On 4 June 2025, the European Data Protection Board ("EDPB") adopted its "Guidelines 02/2024 on Article 48 GDPR", clarifying the conditions under which judgments or decisions from third country authorities requiring the transfer or disclosure of personal data may be recognised or enforced within the European Union. The EDPB emphasises that EU member states can only recognise or enforce such judgments or decisions if they are based on an international agreement, such as a mutual legal assistance treaty (MLAT), in force between the requesting third country and the EU or a Member State. The Guidelines underscore the legal sovereignty of the EU, noting that foreign judgments or decisions cannot automatically have legal effect within the EU.

    The EDPB highlights that a request from a foreign authority does not itself constitute a legal basis or a ground for transfer. It refers to the two-step test: first, ensuring a valid legal basis for processing under Article 6, and second, identifying an appropriate ground for transfer under Chapter V. An international agreement may provide for both a legal basis (under Article 6(1)(c) or 6(1)(e)) and a ground for transfer (under Article 46(2)(a)). In the absence of an international agreement, or if the agreement does not meet the requirements of Article 6(1)(c) or 6(1)(e) GDPR, controllers and processors may rely on other legal bases. The EDPB stresses that, in this event, controllers and processors can also rely on other well-known transfer mechanisms such as an adequacy decision, appropriate safeguards, or, in limited cases, a derogation under Article 49.

    EU Commission extends EU-UK adequacy decision

    On 24 June 2025, the European Commission adopted a six-month extension of the UK data adequacy decisions, permitting the continued free flow of personal data from the European Union to the United Kingdom until 27 December 2025. The extension aims to give the EU Commission sufficient time to undertake a comprehensive assessment of the UK's revised data protection framework, including the recently enacted Data (Use and Access) Act, to determine whether the UK continues to provide an adequate level of protection for personal data, as required under EU law. During this period, the key safeguards that underpinned the original adequacy decisions in 2021 remain in effect, ensuring ongoing compliance and stability in data transfers.

    Senior EU officials have emphasised that maintaining data flows between the EU and UK is essential for supporting cross-border business and upholding vital partnerships. The decision to extend follows a positive opinion from the European Data Protection Board and approval by EU Member States, underscoring broad institutional support for this interim measure. The Commission will base its final decision on the renewal of the UK adequacy decisions on the outcome of its assessment of the UK’s new data protection regime.

    TikTok challenges DPC's EUR 530 million fine 

    On 27 May 2025, TikTok initiated judicial review proceedings in the Irish High Court to challenge a EUR 530 million fine imposed by the Data Protection Commission ("DPC") for breaches of the GDPR relating to the transfer of European users' personal data to China. In its decision of 2 May 2025, the DPC found that TikTok had failed to ensure that personal data accessed by staff in China was protected to a standard equivalent to that guaranteed within the European Economic Area. The DPC had also directed TikTok to bring its processing activities into compliance within six months and had announced that it would suspend data transfers to China if compliance was not achieved.

    Update from Germany 

    German Federal Data Protection Commissioner imposes fines totalling EUR 45 million on Vodafone

    On 6 June 2025, the Federal Commissioner for Data Protection and Freedom of Information ("BfDI") announced that it had imposed two fines totalling EUR 45 million on Vodafone GmbH for breaches of data protection law.

    The first fine of EUR 15 million related to Vodafone's inadequate oversight and monitoring of partner agencies, which had engaged in fraudulent activities such as creating fictitious contracts and making unauthorised contract changes to the detriment of customers. The BfDI's second fine of EUR 30 million related to security deficiencies in the authentication process for the 'MeinVodafone' online portal and the Vodafone Hotline, which enabled unauthorised third parties to access sensitive eSIM profiles. Additionally, the BfDI identified vulnerabilities in Vodafone's distribution systems, issuing a warning over Vodafone's inadequate technical and organisational measures.

    Organisations relying on data processors in their supply chain should take note of the BfDI’s comments regarding adequate monitoring of data processors. These comments align with similar guidance from the EDPB in its opinion released last year on the roles and responsibilities of data controllers who appoint data processors.

    Updates from France

    CNIL fines SOLOCAL MARKETING SERVICES €900,000 for inadequate consent in digital marketing campaigns

    On 15 May 2025, the French data protection authority, CNIL, imposed a fine of €900,000 on SOLOCAL MARKETING SERVICES. This decision follows an investigation conducted within CNIL’s priority focus on commercial prospecting practices. SOLOCAL, a company specializing in digital marketing, acquires prospect data primarily from data brokers, which it then uses in SMS and email marketing campaigns.

    The CNIL found that SOLOCAL processed these data without obtaining valid consent from the data subjects. The consent mechanisms implemented by the data brokers were deemed misleading: acceptance buttons were prominently displayed, while alternatives allowing individuals to participate without consenting to data usage were poorly visible or sometimes hidden within the text. According to CNIL, this practice failed to secure free, informed, and unambiguous consent as required by the GDPR.

    As the final data controller, SOLOCAL bore the responsibility to ensure that valid consent had been obtained. However, the safeguards it claimed to have in place were manifestly insufficient. Even more concerning, upon discovering that one of its partners could not provide proof of consent, SOLOCAL continued to use the affected data for an additional 17 months before ceasing their processing.

    The CNIL concluded that SOLOCAL committed two serious violations: first, the lack of prior consent for electronic marketing; and second, the failure to demonstrate that data subjects had granted such consent.

    For more details, you can access the decision here (French only). 

    Data brokers under scrutiny: CNIL imposes €80,000 fine on CALOGA for multiple GDPR violations 

    On 15 May 2025, the French Data Protection Authority (CNIL) imposed a fine of €80,000 on the company CALOGA for multiple breaches related to commercial prospecting and data protection. CALOGA acquired prospect data from data brokers, used it to send advertising emails and shared it with third-party partners for similar marketing purposes.

    The CNIL found that the consent mechanisms employed by the data brokers did not ensure that consent was freely given and informed. Despite being the final data recipient, CALOGA failed to verify the validity of the consent before processing the data and also failed to implement an effective opt-out process.

    Additionally, CALOGA transmitted prospect data to third parties relying on a claimed legitimate interest lawful basis, even though explicit consent was required for such transfers. This practice contravened fundamental GDPR principles regarding data sharing and transparency.

    Finally, the CNIL criticized CALOGA’s data retention practices. The company retained prospect data for up to four years following a simple email opening, without implementing proper data segregation or secure archiving. Such extended retention periods exceeded the limits prescribed under the GDPR.

    This decision highlights the CNIL's heightened scrutiny of the practices of data brokers and the need for careful diligence across all activities in the data processing chain when data brokers are relied on.

    For more details, you can access the decision here (French only). 

    Ten new sanctions issued by the CNIL in 2025 under the simplified procedure

    Since January 2025, the French Data Protection Authority (CNIL) has issued ten new sanctions under its simplified procedure, totaling €104,000 in fines. These decisions primarily address breaches of data protection obligations, with a particular focus on the monitoring of employees in the workplace.

    Six of the sanctions targeted companies that implemented excessive surveillance measures, such as continuous video monitoring at work premises or the ongoing geolocation of employee vehicles. The CNIL emphasized that such practices violate the data minimization principle, as they are not justified by exceptional security needs and less intrusive alternatives are available to achieve the same objectives.

    Other infringements identified by the CNIL included excessively long data retention periods, failure to properly inform employees about the surveillance tools in use, and a lack of cooperation during regulatory inspections or investigations.

    In addition, the CNIL sanctioned a company for security deficiencies, notably the use of a weak, never-renewed password and the absence of access controls on a video surveillance system, which allowed unauthorized persons to view the footage.

    Finally, a dating website was penalized for failing to report a data breach and for not informing affected users, despite the breach posing a high risk to their rights and freedoms. The CNIL reiterated the obligation to promptly notify the supervisory authority and data subjects of any serious data breaches.

    This series of sanctions highlights CNIL’s ongoing vigilance and serves as a clear warning to organizations regarding their responsibilities in protecting personal data, particularly in employee monitoring and data breach management.

    For more details, you can access the article here (French only). 

    AI systems development: the CNIL issues recommendations on the use of legitimate interest

    The CNIL has published new recommendations on the development of artificial intelligence systems, following a public consultation involving businesses, researchers, and legal experts. The objective is to provide legal certainty for stakeholders while ensuring compliance with the GDPR. These recommendations clarify, in particular, the conditions under which legitimate interest may be used as a legal basis, including for data harvesting (web scraping).

    The CNIL recalls that consent is not always required, provided that strong safeguards are implemented, such as the exclusion of certain types of data, clear information for individuals, an effective right to object, pseudonymisation, etc. It also provides concrete examples, such as the improvement of a chatbot using user data, which may be permitted if these requirements are met.

    These recommendations form part of a broader action plan launched in 2023, aimed at aligning AI practices with the GDPR. Further guidance will follow on topics such as the legal status of AI models, system security, and data annotation. In parallel, the CNIL is working with the EDPB and the European Commission to clarify the legal framework at the EU level, particularly in connection with the forthcoming AI Regulation.

    For more details, you can access the practical guide on legitimate interest here (French only).

    Updates from Spain

    AEPD imposes fine of EUR 3,200,000 on Carrefour for personal data breaches arising from credential stuffing

    The Spanish Data Protection Authority ("AEPD") imposed a total fine of €3,200,000 on Centros Comerciales Carrefour, S.A. ("Carrefour") following five personal data breaches it suffered between October 2022 and September 2023, which involved unauthorised access to customer accounts, primarily through credential stuffing attacks. Attackers used previously leaked email/password pairs to gain access to Carrefour's website and mobile app. The fine was imposed for violation of:

    • Article 5.1(f) GDPR (principle of integrity and confidentiality) and Article 32 GDPR (failure to implement appropriate technical and organizational security measures) due to the lack of sufficient technical and organisational measures to prevent or mitigate the risk of credential stuffing attacks. The lack of timely implementation of 2FA, failure to act on pen-test recommendations, and its inability to detect abnormal access patterns were cited as evidence of insufficient diligence. The authority stressed that, given Carrefour’s size and the volume of personal data processed, a higher standard of care was required.
    • Article 34 GDPR (failure to properly notify affected individuals) - it was found that notifications sent to affected users after the fourth and fifth breaches did not comply with GDPR requirements. The notifications merely informed users that their passwords had been reset, without explaining the nature of the breach, the data involved, the potential consequences, or providing a contact point for further information. This lack of transparency prevented users from taking appropriate protective measures. Additionally, Carrefour was ordered to notify all 118,895 affected individuals in accordance with Article 34 GDPR within one month of the resolution becoming final. 

    This decision highlights the importance of implementing robust security measures, in particular multi-factor authentication, and of transparently notifying data subjects – both failures for which the ICO has scrutinised and criticised organisations, particularly in the recent wave of retail cyber-attacks.

    Updates from Italy

    This month, we wanted to place a particular focus on Italy, as there have been two AI chatbot updates which are useful for any company launching GenAI products, and in particular for launches in Italy, as the Garante has a track record of taking hefty enforcement action against GenAI launches.

    Proceedings launched against DeepSeek for lack of information relating to hallucinations

    The Italian Competition Authority (AGCM) announced that it was launching proceedings against DeepSeek, a Chinese provider of AI, for alleged unfair commercial practices due to its failure to provide users with sufficiently clear, immediate, and intelligible information about the risk of so-called "hallucinations" – i.e., instances where the AI generates inaccurate, misleading, or fabricated information in response to user input. Whilst DeepSeek acknowledges in its "Terms of Use" that outputs may contain errors or omissions and should not be treated as professional advice, this has been deemed insufficient because:

    • The terms are accessible only via a link located at the bottom of the homepage and only in English. 
    • The disclaimer is generic (“AI-generated, for reference only”), appears only in English, and does not appear on key user entry points, such as the homepage, registration, or login pages.

    According to the AGCM, this lack of information about the risk of hallucinations:

    • may affect consumers' ability to make informed commercial decisions (in deciding whether to use DeepSeek's AI services or opt for alternatives offered by competitors); and
    • could mean that consumers wrongly assume that the AI's outputs are fully accurate; this could be detrimental given the broad range of DeepSeek's AI use cases, e.g. those relating to medical advice, financial planning, or legal matters.

    Italian DPA imposes EUR 5 million fine against Luka Inc.

    US-based company Luka Inc. operates Replika, a GenAI chatbot designed to serve as a virtual companion, therapist, romantic partner, or mentor for adult users, using text and voice prompts. The chatbot came under intense scrutiny after reports emerged that it had encouraged minors to engage in self-harm, which led the Italian DPA, the Garante, to investigate and halt the processing of personal data belonging to users in Italy because it was found that:

    • it did not have a valid legal basis for processing; 
    • it did not have a compliant/adequate privacy policy: the policy was only available in English; failed to clearly articulate the legal basis for data processing; did not distinguish between different processing purposes, such as chatbot interaction versus AI training; did not specify retention periods; and failed to make clear that the service was intended exclusively for adults; and
    • it had not implemented any effective age verification mechanisms, despite its assertion that minors were not permitted to use the service. This was problematic as, initially, there was no mechanism to verify users' ages at registration or during use, which resulted in the collection and processing of minors' data. Technical assessments confirmed that even after certain age verification measures were introduced, users could alter their birth dates after registration and circumvent restrictions through methods such as "incognito browsing" - meaning that minors could still access Replika and their data could be processed.

    Luka Inc. faced a EUR 5 million fine (2% of its global turnover), was ordered to make its data processing activities GDPR compliant, and was made subject to a separate investigation into its handling of personal data throughout the entire AI system lifecycle (including the development and training of the underlying language model).

    Spotlight on the Data (Use and Access) Act: An introduction

    The formalities:

    • it doesn't replace existing UK data laws (i.e. the UK GDPR, the UK Data Protection Act 2018 and PECR) but it introduces a complex set of amendments that data practitioners need to understand and organisations need to prepare for;

    • most provisions are expected to come into force within two to six months, while some may take up to a year; and 

    • as a result of the Act, the ICO will update existing guidance and introduce new guidance as changes come into effect; the ICO has helpfully published a guidance tracker [Our plans for new and updated guidance | ICO].

    What didn’t make the final cut of the Act?

    The Act evolved significantly through its passage in Parliament and several proposed reforms were ultimately dropped including:

    • The role of the DPO, which was going to be removed as a statutory requirement, has been retained;

    • Simplification and changes to some of the accountability provisions, including obligations to maintain records of processing and privacy impact assessments;

    • Attempts to clarify the definition of personal data. 

    What are the significant data protection changes introduced by the Act?

    • A new statutory right for individuals to complain to controllers where they think there has been an infringement of the UK GDPR. Such complaints need to be facilitated by controllers (e.g. built into privacy notices), acknowledged within 30 days and responded to without undue delay.

    • Updates to DSAR handling - an individual is now only entitled to personal data that “a controller is able to provide based on a reasonable and proportionate search”. This codifies current case law;

    • A new definition for scientific research which gives greater certainty to organisations using personal data for commercial scientific research.  This new definition is accompanied by confirmation that data subjects can provide a “broad” consent to scientific research.

    • Introduction of “recognised legitimate interests” for certain processing activities such as national security and defence, and responding to emergencies and safeguarding vulnerable people. Such “recognised legitimate interests” bypass the requirement to undertake a legitimate interest assessment.

    • Additionally, the DUA Act sets out a list of processing activities that "may" be processed under the existing legitimate interests lawful basis. These activities include direct marketing; sharing data within groups of companies for internal administrative purposes; and ensuring the security of network and information systems. An LIA is still required for processing personal data for these purposes. 

    • A new "data protection test" for the Secretary of State to use to assess whether a third country has a standard of data protection "not materially lower" than that in the UK. Whilst this arguably creates disparity between how the UK and EU apply different standards for their respective adequacy decisions, and could thereby affect the UK's EU Adequacy Decision, in practice the adequacy decision processes are likely to remain aligned.

    • The ICO will be rebranded as the Information Commission, with new powers such as issuing interview notices and requiring expert reports.

    Final thoughts

    While the Data (Use and Access) Act doesn't overhaul UK data protection law, it introduces meaningful changes that will shape data protection compliance strategies over the coming year.

    In addition, the Act is not purely a data protection piece of legislation and more broadly covers "data use". We'll be releasing weekly episodes to explore the Act's impact on AI and IP, Smart Data schemes, ICO enforcement, and more, so please do stay tuned and subscribe to our Ashurst Data Bytes podcast series so you don't miss an episode.

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.