Podcasts

Ashurst Data Bytes 3: AI Implications in the UK's Data (Use and Access) Act

31 July 2025

Ashurst partner Rhiannon Webster has assembled an expert team to unpack what the Data (Use and Access) Act means for AI, including data protection and IP-related matters. This includes Rhiannon’s Digital Economy team colleagues Will Barrow and Tom Brookes alongside Ashurst IP expert Aaron Cole.

Together, they explain what's changed and what lies ahead for organisations developing and deploying AI. Aaron lays out the copyright policy options the government is weighing up in the coming months, and highlights pertinent court cases over how copyright-protected materials are used by AI platforms.

Tom summarises data protection changes relating to automated decision-making and AI. He explains why this could prompt organisations to expand the way they use such AI tools, and what the impacts might be of rules splintering between the EU and UK. Tom also describes the more permissive environment for taking automated decisions using personal data, which has implications for HR and Finance leaders in particular.

To listen to this and subscribe to future Data Bytes episodes, search for “Ashurst Legal Outlook” on Apple Podcasts, Spotify, or your favourite podcast player. To explore more from Ashurst’s podcast library, visit ashurst.com/podcasts.

The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to. Listeners should take legal advice before applying it to specific issues or transactions.

Transcript

Rhiannon Webster:

Hello, I am Rhiannon Webster, a partner in Ashurst's Digital Economy team and head of our UK data and cyber practice. Welcome to the next episode in our new Ashurst Data Bytes podcast series, the latest spin-off from our monthly bulletin, where we consolidate the latest data and cyber breach developments for you in bite sizes. We are using this podcast series to cut the UK's new data law, the Data (Use and Access) Act, into bite-sized chunks.

For this episode of the series, I'm handing over the reins to Aaron Cole, counsel in our IP practice, and Will Barrow and Tom Brookes, who are senior associates in our Digital Economy team. They will be delving into the AI-related implications of the Act with a specific focus on data protection and IP-related points. Over to Will to kick off the conversation.

Will Barrow:

Thanks, Rhiannon. So, the Data (Use and Access) Act, what's changed and what lies ahead for AI? Aaron, as the IP guru in the team, what from your perspective, are the key changes you've picked up on and what's going to impact organisations developing or deploying AI systems following the Act?

Aaron Cole:

Thanks, Will. I think the main takeaway is not much. Well, not much, not yet anyway.

If you've been paying attention to the genesis of the Bill and its passage through Parliament, you'll have heard commentators talk about the ping-ponging between the House of Lords and the House of Commons. And this was in part due to a failure to get any material agreement on how to treat the mining or scraping of content from the internet by AI systems, from a copyright perspective.

There were some big ticket names that weighed into the conversation. You might've seen that Elton John, Dua Lipa, Sir Paul McCartney, all joined the conversation, really calling for there to be rights for content creators and to be able to prevent AI systems from just scraping en masse their content and being able to reproduce it or being trained on it.

At the other end, we saw AI developers and platforms wanting to be able to continue business as per the status quo.

The result of this ping-ponging has been really just one big kicking of the can down the road. And so, what's been retained is an obligation on the Government to produce an economic impact report. What it's required to do in nine months' time is produce a report which considers the economic impact of each of the four policy options set out in a recent consultation paper on copyright and AI.

So, those four options include:

  • just do nothing, and leave UK copyright laws unchanged;
  • strengthen copyright requirements, in particular licensing, which would create an obligation on AI systems to obtain licences from copyright owners;
  • introduce broad data mining exceptions, which would effectively give AI systems free rein to use whatever content they can find on the internet to train their systems; or
  • introduce limited exceptions which reserve some rights for intellectual property holders, which might include an opt-in/opt-out system, alongside greater rules around the transparency of how copyright materials are used by AI systems.

So, it's really just a case of strapping yourself in and waiting another nine months to see where we land on this. In the meantime, questions are being raised. How copyright-protected materials are being used by AI platforms is a real, material issue for businesses, and we are seeing this play out currently in the courts: Getty Images and Stability AI have recently finished their 18-day High Court trial. So, this is an issue affecting businesses on a day-to-day basis.

Will Barrow:

Interesting. Really interesting. Tom, has Elton put on his data subject hat and is he equally up in arms about the use of his personal data and how that may be expanded by the Data (Use and Access) Act?

Tom Brookes:

I mean, there's no can-kicking here, unlike the IP position as Aaron summed it up. We've got some changes from a data protection perspective in connection with AI, and it's all focused on the concept of automated decision-making.

In effect, this concerns decisions which are made without any human intervention or any involvement of humans. And in the context of AI, there's a wide range of decisions which could fall into that category of automated decisions. The rules in place before the Act was passed are inherited from the EU. So, under Article 22 of the EU GDPR, there was, in effect, a prohibition on automated decisions which have a legal or similarly significant effect unless one of three conditions was met. The first is that you've got the explicit consent of the individual concerned. The second is that there was a contract between the organisation controlling the automated decision and the individual concerned. And the third is that it was authorised by law. So, quite a narrow set of parameters for making these automated decisions with a legal or similarly significant effect.

Now, what's happened under the Data (Use and Access) Act is that the category of automated decisions to which those restrictions apply has been narrowed. So, now under the Act, those limitations will only apply to what's called "special category data". This is the most sensitive data: data concerning health, data concerning ethnicity. And so, if you are making an automated decision with a significant effect and it involves special category data, then the same restrictions that used to apply will still apply to you.

However, the key is that if you're not relying on special category data, not using special category data, then you can use any of the lawful bases under the GDPR in order to make that automated decision. And that will also include legitimate interests, which is a bit of a game changer really, because that's the most permissive of the lawful bases organisations can rely on. Where this comes out in practice will be in areas such as HR and Finance, where automated decisions and the use of AI technology could potentially be expanded by this change, albeit only if they're not relying on and using special category data. But that's a change nonetheless.

Will Barrow:

Tom, do you think in practice that means there's going to be a significant expansion of the way organisations can use automated decision-making AI tools? And are there any, kind of, existing safeguards that we're used to that are going to apply notwithstanding this expansion?

Tom Brookes:

Yeah. So, I think I'll probably answer that in reverse, starting with the safeguards: yes, there are. Even if the decision has a significant effect and it doesn't involve special category data, you still have some guardrails. Those guardrails are around transparency, so letting people know that the automated decision is happening. And there are some elements of control as well: rights for individuals to request human review of that decision, and to contest the result or outcome of that decision. So, the safeguards are there.

To your first point, around whether it's going to open the floodgates and create a much more permissive environment: it's an interesting question and we're going to have to wait and see. The ICO seems to think that might be the possibility. On its website, it has listed a summary of the key provisions in the Data (Use and Access) Act which it believes may stimulate innovation, and this change in relation to automated decision-making is one of those points.

Practically speaking though, in terms of how these systems are being developed and deployed in reality, it's going to be a bit of a challenge, I think. And I think the main reason for that is that we've now got a splintering between the rules in place in the EU, which remain unchanged as we described earlier on, and what's happened in the UK. So, even if the UK is slightly more permissive, you are more likely to be deploying or developing an AI system which applies across both markets. It'll be quite expensive to invest in two different systems or two different ways of making decisions across these two very closely linked markets.

Will Barrow:

Yeah. That makes complete sense. And obviously you have the overlay of other associated regulations. In the EU you have the EU AI Act, which is less about individuals' fundamental data protection rights and more about the product safety of AI, but they're broadly addressing the same thing, right? And if you're rolling out a high-risk AI system in the EU, you are going to be subject to obligations to ensure the system is subject to human oversight and that it operates transparently, and in some cases to ensure that people subject to decisions made by that AI system can get information about how the decision was made. And there's some direct interplay with data protection law there.

So, it's kind of interesting to see how these running themes are coming through, but I kind of agree with you in the practical sense it feels like it is more permissive, but it may not actually end up changing that much, right? Because you still have this whole patchwork of interrelating regulations that may apply in different areas, but ultimately, if you're a global company are going to apply to you and your deployment of AI systems and you're going to have to think about them all.

And ultimately, if you are rolling out an automated decision-making AI tool in the UK that's now subject to these slightly more permissive requirements, you are still going to want the ability to call on the provider of the system above you for information about how the system has operated. So, if a decision is contested, you can then explain to the data subject how that decision was arrived at. And you are going to need those things to make sure that you can discharge your obligations under the EU AI Act as well. So, there's a lot of overlap and a lot of the same concerns playing into the same things, which means, I think, the way governance frameworks and contracts are going to look is probably not going to be materially shifted, although maybe that's not quite right.

Tom Brookes:

No, I think you are right. And I think that if you play it through, if you are deploying an AI system and it's your customer's data who's being pushed through that system, you're the data controller, you've now got these obligations to give transparency, to give control. In order to do that, you are then relying on the developers to give you oversight and understanding of how that decision's been made. So, from that governance perspective, there's clear overlaps then between what's being driven at under these legal changes and particularly the safeguards and what we are seeing and playing out under the EU AI Act.

Practically speaking, you want the same thing, an understanding of how these decisions are made, so that you can either assess your obligations under the EU AI Act and comply with them, or in order to make people aware of what's going on in the context of data protection law under the Data (Use and Access) Act. So, there's definitely parallels there and the same information's needed.

Will Barrow:

So, Aaron, do you have a sense of where the Government might land with its economic impact report?

Aaron Cole:

Well, there are currently four proposed options. They can firstly do nothing and leave UK copyright laws unchanged. Secondly, they could strengthen copyright licensing requirements across the board. Thirdly, they could introduce broad data mining exceptions. Or fourthly, they could introduce a data mining exception which allows rights holders to reserve their rights. And all of this is expected to be underpinned by supporting measures on transparency.

The government's already indicated its preference is to introduce a rights-holder opt-out type model. The AI sector would, no doubt, like broad data mining exceptions, and the creative sector would want stricter licensing requirements. So, there are four options, two quite powerful sectors with diverging interests, and the Government trying to find a balance between them. At this stage, where we land is still an open question.

Will Barrow:

Thanks, Aaron. Any thoughts and key takeaways from an IP perspective?

Aaron Cole:

Yeah, sure. I'd say the key takeaway is no changes just yet. I think we need to stay tuned for the economic impact report in nine months' time.

Will Barrow:

Tom, what about you as a DP guru?

Tom Brookes:

I think it comes down to this point that the changes, at least on face value, look like they create a more permissive environment for taking automated decisions involving personal data. But with that is likely to come, I think, more scrutiny from the regulator. This isn't traditionally an area that the UK ICO has looked at terribly closely. There has been some new case law on automated decisions fairly recently, and it was quite restrictive.

In light of that point about the regulator being focused on this area, we know the guidance is coming, we know the statutory codes are coming, so it's going to be top of mind for organisations. They're likely to want to understand, particularly in data-heavy industries and companies, what decisions they are making which are automated. Do they have a view on whether those decisions are significant? How has that been assessed? And it may be that organisations who are active in this space are going to have to rethink and look back at some of that analysis. If they've done data protection impact assessments, legitimate interests assessments are going to be coming down the track, now that you've got these alternative lawful bases to rely on.

All of that has to be considered, and it draws upon knowledge from different team members: a legal team working with the technology team to really, truly understand how these decisions are being made, and then applying that to the law and to the guidance which is coming down the track.

Will Barrow:

I think that just about wraps up the conversation for today. Thank you, Tom. Thank you, Aaron. These are just incredibly interesting points. We could get down into the detail of this and go on for hours, honestly. And I know we have in separate conversations, in the nerdiest way possible. So, Rhiannon, what are your takeaways from this?

Rhiannon Webster:

I think my main takeaway is that I've never known so many references to Dua Lipa, both in what you were saying about her intervention in Parliament, but also, I don't know if you know that Dua Lipa became the nickname for the Bill as it was passing through Parliament. When I say the nickname, it was apparently the nickname among ICO staff and data protection practitioners. Who knew Dua Lipa would have such significance in the data protection world?

Will Barrow:

Maybe she knew that and that's why she decided to wade into the IP debate.

Rhiannon Webster:

Yeah. She's like, "This is my law."

Will Barrow:

Exactly.

Rhiannon Webster:

Will, Tom, Aaron, thank you so much for joining me and providing such practical insights. And thank you for listening to our podcast. Please do share the podcast with interested colleagues and look out for the upcoming podcasts on the Act, which we're aiming to release every week.

