Data Bytes 41: Your UK and European Data Privacy update for October 2023
03 November 2023
Welcome to our October edition of Data Bytes, where the Ashurst UK and European Data Privacy and Cyber Security Team look to summarise the key privacy legal and policy developments of the previous month.
Of particular note this month are two major enforcement actions, from the ICO and FCA respectively. The ICO has bared its regulatory teeth by issuing a preliminary enforcement notice against Snap for potential failures to properly assess the privacy risks posed by its Gen AI chatbot 'My AI', in particular for users between the ages of 13 and 17. This is a wake-up call for all organisations looking to implement Generative AI and shows that the ICO is unlikely to shy away from enforcing in this area. The privacy professional is on the front line of ensuring compliance. See our views, and those of our clients, in our spotlight section below, where we summarise discussions held at a roundtable we hosted at Ashurst this month on the role of the privacy professional in the governance of AI.
The second major enforcement action in the world of data this month came, somewhat surprisingly, from the FCA, which has announced it has fined Equifax £11 million for breaches of FCA principles related to a cyber event six years ago that affected many millions of UK consumers. Whilst we have all been cognisant of the possibility of FCA-regulated entities being fined by both the FCA and the ICO for the same event, this is the first time that double jeopardy has played out in practice.
On 13 October 2023, the UK Financial Conduct Authority ("FCA") announced it has fined Equifax Ltd ("Equifax") for failing to manage and monitor the security of its consumer data. The fine specifically relates to a 2017 cyber incident affecting the personal data of 13.8 million UK consumers, the processing of which Equifax had outsourced to its US parent company.
The FCA concluded that Equifax had breached Principles 3, 6 and 7 of the FCA's Principles for Businesses. Principle 3 requires a firm to take reasonable care to organise and control its affairs responsibly and effectively, with adequate risk management systems. Principle 6 requires a firm to pay due regard to the interests of its customers and treat them fairly: when a firm becomes aware of a data breach, it is essential that it promptly notifies affected individuals and informs them of the steps that they can take to protect themselves. Principle 7 requires a firm to pay due regard to the information needs of its clients, and to communicate information to them in a way which is clear, fair and not misleading.
The UK Information Commissioner's Office ("ICO") had previously fined Equifax £500,000 in 2018 in relation to the same incident. There has always been the possibility of an FCA-regulated organisation being fined by both the ICO and the FCA due to the overlap in regulatory oversight of the processing of personal data, but this is the first time it has played out in practice.
The FCA noted that Equifax was not informed by its US parent of the impact on UK consumers until a few minutes before the incident was publicly announced, and was therefore not adequately prepared to deal with customer complaints. This highlights the importance of considering the group-wide implications of cyber incidents and how these are dealt with in intra-group agreements, policies and procedures.
On 19 September 2023, the UK Government announced it will launch a pilot scheme next year to develop a multi-agency sandbox. Dubbed the "AI and Digital Hub", the new sandbox will bring together the different regulators involved in the oversight of AI and digital technologies, providing a single point of contact for business access and support. The sandbox will be run by members of the Digital Regulation Cooperation Forum ("DRCF"), which currently includes the ICO, Ofcom, the CMA and the FCA. The pilot recognises that there is an increasing range of regulatory regimes that organisations must comply with and a growing need for "joined-up" advice across the regulatory landscape. The DRCF said that all businesses will be able to benefit from the sandbox's support through an accompanying case study archive, which will include anonymised examples of support and other relevant material from across the DRCF that innovators might find useful.
The pilot bears a strong resemblance to the ICO's new Innovation Advice Service, which is also designed to increase innovators' confidence in launching innovative products in a safe way, and which also publishes anonymised case study advice. The multi-agency sandbox is expected to launch in the first half of 2024 for a 12-month pilot period. The DRCF said it will be providing more information, including how to apply to the hub, later in 2023.
On 10 October 2023, the Court of Appeal issued a judgement upholding an earlier High Court decision to dismiss a claim brought against the ICO by Mr Ben Delo. Mr Delo claimed that the ICO had unlawfully failed to determine his complaint about a subject access request he had made to Wise Payments Limited. The case considered important questions as to the extent to which the ICO is expected to investigate and determine the merits of each complaint it receives. The Court of Appeal reaffirmed the discretion the ICO retains in deciding how far it needs to consider each complaint before determining a course of action: the ICO may reach a view on a complaint without determining whether an infringement has occurred. The ICO has welcomed the court's decision, in particular the finding that it acted lawfully in determining the outcome of Mr Delo's complaint. The decision reaffirms the ICO's discretion to 'screen' such complaints and manage its caseload, which will likely lead to speedier resolution of the complaints it does investigate.
On 17 October 2023, the UK First-tier Tribunal overturned an enforcement notice issued by the ICO to the facial recognition company Clearview AI Inc, which required the deletion of personal data and payment of a £7.5 million monetary penalty. The notice specifically related to the storage and processing of facial images scraped from the internet and social media platforms, which the ICO had determined infringed the UK GDPR.
In its appeal submissions, Clearview disputed that it had infringed the UK GDPR and claimed the ICO had no jurisdiction to issue the notice and penalty. The tribunal determined that, although Clearview did carry out data processing related to monitoring the behaviour of people in the UK, the processing was "beyond the material scope" of the UK GDPR. The rationale for the decision is that the company was deemed exempt because it only provided services to non-UK/EU law enforcement or national security bodies and their contractors. UK data protection law provides that acts of foreign governments fall outside its scope, on the basis that one government should not bind or control the activities of another sovereign state.
In response to the judgement, the ICO stated that it will "take stock" of the ruling and carefully consider next steps. However, the ICO clarified that the judgement does not remove its ability to act against companies based internationally that process the data of people in the UK. To that extent, despite the successful appeal, the decision ultimately reinforces that scraping large volumes of publicly available data is an activity to which UK data protection rules can apply, and consequently the tribunal's ruling should not be relied on as granting blanket permission for such scraping activities generally.
On 6 October, the ICO issued a preliminary enforcement notice against Snap over potential failures to properly assess the privacy risks posed by its Gen AI chatbot 'My AI', which is powered by OpenAI's GPT technology. Users between the ages of 13 and 17 were flagged as particularly vulnerable, with My AI processing their personal data through information input to the chatbot.
This is a clear demonstration that the ICO is willing to take enforcement action for breaches of data protection law in the context of AI, and it showcases both the ICO's approach to 'agile enforcement' and its desire to focus on protecting the most vulnerable in its enforcement activities. If a final enforcement notice is adopted against Snap, it will be required to stop processing data in connection with 'My AI' – an action which could have significant repercussions.
For other companies launching Gen AI products in the UK, this is a stark warning and reminder to conduct thorough and robust privacy risk assessments. In particular, if the Gen AI product will be available to children, you will need to ensure that you have specifically considered the data protection risks to children and complied with the ICO's Children's Code.
On 3 October, the ICO published its finalised guidance on data protection and monitoring workers (the "Guidance"). This follows the draft guidance released in October 2022 as part of the ICO's consultation on monitoring, in light of the rise in home working and advances in technology.
In its final form, the Guidance makes clear use of 'must', 'should' and 'could' to distinguish measures which are required from those that are expected practice or merely examples of compliant approaches. Although the core aspects of the guidance remain unchanged from its draft form, the ICO's expectations on workplace monitoring are now much clearer, and the final form of the Guidance is worth a read for this reason alone. One example is the section on the involvement and consultation of staff, which notes that employers "should" involve workers both in planning monitoring and when carrying out any DPIA. This is often a point which businesses are reluctant to implement, and there may be greater criticism from the ICO where there is no clear reason for avoiding worker consultation.
The Guidance has in particular made clearer the requirement that DPIAs are carried out when introducing monitoring which may result in financial loss, such as where the employer implements performance management processes based on the data obtained via the monitoring. There is also greater detail in the sections on automated decision-making, audio/video recording and biometric data, including a new example that an employer would have a legitimate interest in tracking the location of a mine worker but not an office worker.
The Guidance retains many practical and worked examples which are specific to workplace monitoring and the published version should be a valuable resource for employers.
On 4 July 2023, the Court of Justice of the European Union ("CJEU") ruled that users must have a free choice to use Meta's online social networks without personalised ads, if necessary for an appropriate fee. Consent would not be considered freely given if Meta failed to provide a separate consent option for personalised ads. The CJEU's preliminary ruling followed questions referred by the Higher Regional Court of Düsseldorf (OLG Düsseldorf), including whether the German Federal Cartel Office had competence to deal with matters of data protection law in proceedings it had brought against Meta for processing data related to activities outside Facebook ("off-Facebook data") that were linked to Facebook through programming interfaces.
With its ruling, the CJEU has strengthened competition authorities' competence to intervene on GDPR infringements from a competition law perspective. According to the court, a national competition authority can investigate and sanction infringements of the GDPR as a violation under the regime of Art. 102 TFEU, given that access to data has "become a significant parameter of competition between undertakings in the digital economy." However, the CJEU obliges competition authorities to follow decisions issued by the CJEU and by the competent national data protection authorities or lead supervisory authority in regard to the same or similar conduct. National competition authorities must also cooperate with the competent data protection authorities where they have doubts as to the scope of the assessment carried out by those authorities or the lead supervisory authority, or where those authorities have not investigated the conduct in question.
Off-Facebook data can be any kind of data, including data about sporting activities, friendship information from dating app activity, data linked to political party websites, etc. Accordingly, the CJEU sees particular risks that certain off-Facebook data could be considered special categories of data (Art. 9 para. 1 GDPR). In that context, the court deals with the legal justification for processing data "which is manifestly made public by the data subject" (Art. 9 para. 2 lit. e GDPR). The CJEU emphasises that this justification requires an assessment of whether the user has the choice, based on individual settings and knowledge of the entire situation, to disclose information to the public or to a limited group of persons. Further, with regard to ordinary personal data, the CJEU rejects contract performance as a legal basis for creating personalised ads in order to offer an online social network service to the user (Art. 6 para. 1 lit. b GDPR). It also rejects legitimate interest (Art. 6 para. 1 lit. f GDPR) as a legal basis: according to the court, Meta processes and monitors a vast amount of private data of its users, "which may give rise to the feeling that his or her [the user's] private life is being continuously monitored", and a user "cannot reasonably expect that the operator of the social network will process that user's personal data, without his or her consent, for the purpose of personalised advertising."
Subsequently, the Wall Street Journal reported in October that Meta plans to charge EU-based users for an ad-free mobile version of Instagram or Facebook at USD 14-17 per month, in combination with desktop usage rights for Facebook and Instagram. With this approach, Meta would be following through on the CJEU's preliminary ruling, as well as on a three-month ban issued by the Norwegian data protection authority (Datatilsynet) in relation to surveillance-based behavioural advertising by Facebook and Instagram, effective since 4 August 2023.
On 22 September 2023, seven German data protection authorities published a guidance note on how to deal with Microsoft's standard data processing agreement for the use of "Microsoft 365". The guidance note follows a joint decision of all German data protection authorities (DSK) from November 2022, in which the DSK determined that Microsoft's standard data processing agreement for "Microsoft 365" (Products and Services Data Protection Addendum, "DPA") did not meet the requirements of Art. 28 para. 3 GDPR. The guidance note identifies and addresses the areas of non-compliance and aims to support controllers in negotiating contractual amendments. In particular, it covers Microsoft's controllership of data processing for its own business purposes, the design of the controller's right of instruction, the implementation of technical and organisational measures, retention periods, and sub-processors. The guidance note excludes the assessment of international data transfers and does not assess all technical Microsoft 365 functions.
The guidance note forms part of the German data protection authorities' continuing efforts to make Microsoft further improve the GDPR compliance of its 365 product. By creating specific transparency on the authorities' view of a particular product, they give Microsoft's customer base, as controllers, a clear reference document on what they should seek in contract negotiations, given that customer leverage is commonly limited in respect of a standardised cloud service. The public debate remains open as to whether the articulated expectations are realistic and whether Microsoft will be able to fulfil them.
On 20 October 2023, the European Commission published its delegated regulation supplementing Regulation (EU) 2022/2065 (Digital Services Act, "DSA"), laying down rules for auditing very large online platforms ("VLOPs") and very large online search engines ("VLOSEs"). Under the DSA, VLOPs and VLOSEs must ensure public accountability, including through transparency reports, privileged access to data for vetted researchers, public repositories of advertisements, risk assessment reports and risk mitigation measures. Further, VLOPs and VLOSEs must undergo an annual independent audit to assess their compliance with the DSA (Art. 37 DSA). The delegated regulation defines quality standards for auditing organisations, procedural rules and methodology, including parameters for a risk analysis.
On 24 October 2023, the EDPS published its opinion on the Artificial Intelligence Act ("AI Act"). The EDPS stresses the importance of prohibiting AI systems that pose unacceptable risks to individuals and their fundamental rights, including AI systems for social scoring by governments, automated recognition of human features and other behavioural signals in public spaces, and categorisation of individuals based on their biometric features. The EDPS welcomes the AI Act's designation of it as a notified body and market surveillance authority to assess the conformity of high-risk AI systems developed or deployed by EUIs – due to its experience with enforcing fundamental rights – as well as the establishment of the European Artificial Intelligence Office ("AI Office"), which will centralise the enforcement activities of competent national authorities and harmonise the application of the AI Act. In this context, the EDPS calls on the European Commission, the Council of the European Union and the European Parliament to attribute voting rights to the EDPS as a full member of the AI Office. It also considers that affected individuals should have the right to lodge a complaint with a competent national authority, or with the EDPS itself, in cases of infringement of the AI regulation. To strengthen confidence in supervisory activities, the EDPS recommends that national data protection authorities be designated as the supervisory authorities under the AI Act, given that they may already have specific expertise in assessing the compliance of AI systems from a GDPR perspective.
Earlier this month we hosted our annual data protection roundtable in London, in collaboration with Ashurst Risk Advisory. This year's event focused on the role of the privacy professional in the governance of AI, and the room was packed with clients from the financial, insurance, technology and consumables industries.
There was lively debate covering a range of topics relating to managing AI risks and finding solutions to help business unlock the value of AI in a compliant manner.
From a regulatory perspective, conversation focused on the current divergence between the UK government's "non-statutory framework", as outlined in its white paper of March 2023, and the EU's proposed AI Act. Alexander Duisberg from our Munich office provided attendees with a view from continental Europe, and a number of attendees noted they were in favour of the UK developing cross-sectoral AI regulation aligned with the legislative direction being taken in the EU.
It was highlighted that time is running out for the current UK government to adopt any specific AI legislation, given that the final King's Speech planned before the next general election takes place in November. Attendees were also interested to learn about a new cross-regulatory AI pilot scheme spearheaded by the Digital Regulation Cooperation Forum (further details above).
The majority of attendees noted that they were being asked to take part in an increasing volume of data protection reviews involving the use of AI by their organisations. The debate concentrated on how existing data protection principles, as codified in the GDPR, interrelate with the principles underpinning the UK government's non-statutory framework for AI. Another topic of discussion was whether learnings from developing data protection compliance programmes could be leveraged to address AI governance needs. Matt Worsfold from our Risk Advisory Team led this part of the conversation and provided insights on the key components of implementing AI governance.
A number of attendees explained that they had already started to develop formal AI governance frameworks, such as establishing cross-functional councils, working groups and specific intake forms for assessing the risks and impacts of AI initiatives. It was noted that the involvement of stakeholders in technology, security, legal and compliance roles is integral to the successful implementation of these governance structures, which has strong parallels with how many attendees already manage privacy risks in their organisations.
Our next data protection event in London will be our annual data protection round-up, which is scheduled to take place in February. Please register your interest in attending here.
Authors: Rhiannon Webster, Partner; Andreas Mauroschat, Partner; Alexander Duisberg, Partner; Shehana Cameron-Perera, Senior Associate; Tom Brookes, Associate; David Plischka, Associate; Nilesh Ray, Solicitor; Prithivi Venkatesh, Trainee Solicitor.