Podcasts

Governance & Compliance 10: Agentic AI rockets to the top of board agendas. Now what?

09 March 2026

In this episode of our UK Governance & Compliance mini-series, host Will Chalk is joined by two experts on AI governance, Ashurst’s Fiona Ghosh and Matt Worsfold. Drawing on Ashurst’s recent Board Priorities 2026, they peel back the layers of AI hype and market complexity to get to the heart of the matter: what do directors need to know and – most importantly – what do they need to do?

In less than 25 minutes, the trio explain the story so far (AI governance in the past three years) and confront today’s great challenge: ensuring frameworks are genuinely embedded and operationalised. They also discuss contrasting regulatory approaches (including mixed fortunes for the EU’s ambitious AI Act and the UK’s more light-touch approach) and the common thread across jurisdictions: uncertainty.

Our panel also gets practical – strongly recommending hands-on AI literacy for directors. As Fiona points out, “Until you are actually grappling with the topic yourself, you can't possibly understand some of the issues and some of the risks that are presented.”

The discussion also explores the changing remit of non-executive directors and the widening breadth of skills they now require to be effective in their roles. And our experts highlight relevant guidance available for directors who want to get on the front foot.

To listen to this and subscribe to future episodes in our governance mini-series, search for “Ashurst Legal Outlook” on Apple Podcasts, Spotify or your favourite podcast player. You can also find out more about the full range of Ashurst podcasts at ashurst.com/podcasts.

To receive updates and alerts on the issues raised in this podcast mini-series, subscribe to Ashurst’s regular Governance and Compliance Updates.

The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to. Listeners should take legal advice before applying it to specific issues or transactions.

Transcript

Will Chalk:

Hello, and welcome to Ashurst Legal Outlook, the latest in our series of governance and compliance-focused podcasts.

My name's Will Chalk, and I'm a Partner in Ashurst's Corporate Transactions practice, focusing on governance. You are listening to a special series tackling our view of the top risk-related priorities for boards in 2026.

In each episode, we explore a major risk, trend, or opportunity commanding attention when setting board agendas this year.

Our next topic covers the issue of AI and Data Governance, and how boards should approach it this year. To help unpick this issue as much as one can in what's supposed to be a relatively short podcast, we've got two of the principal authors of our Artificial Intelligence Board Priority. Specifically, we've got Fiona Ghosh, a Partner in our digital economy transactions practice in London, who specialises in deep tech, which sounds very cool, including AI, payments, fintech, and strategic collaboration arrangements. We've also got Matt Worsfold, a Partner in our Risk Advisory consultancy in London, one of whose chosen specialist subjects is AI and Data Governance. Some of you will remember Matt is also pretty hot on all things cyber too.

Okay, let's jump in. So, AI has been around for some time, but since OpenAI launched ChatGPT in 2022, it has, let's be honest, shifted the business landscape for good (or for good or ill, depending on your perspective). We'll come onto the regulatory landscape shortly, but Matt, perhaps you can kick us off by bringing us up to speed on the latest general developments in this area.

Matt Worsfold:

Yeah, thanks, Will, and great to be joining you and Fiona on the podcast today.

So, where are we at with AI in early 2026? I think over the past few months, we've seen a maturing of generative AI tools and the capabilities of large language models. And I think what we're seeing is an improvement overall in things like accuracy, consistency of outputs and reasoning, though there's still a bit of a way to go. Partly this has been driven by an improvement in capabilities around things like retrieval augmented generation, which sounds very complicated, so people call it 'RAG' for short. But that is essentially connecting specific and often proprietary internal data assets and data sets to large language models, which, as I say, improves the accuracy and relevance of the responses that people are getting. So, it's starting to make those AI tools work a bit better for the particular context or use case that they're operating within.

One of the other trends we're seeing is the emergence of what we call 'context-specific AI tools'. These are often built upon some of those large language models or generative AI tools that we're all very familiar with, your Claudes, your Geminis, amongst others. And again, that's about taking those LLMs and training them or fine-tuning them for a very specific context. That might be a particular sector, it might be a particular firm, or similar. And again, that's increasing the accuracy and relevance of those tools.

From a corporate and business standpoint, I think we saw this last year, but again, a big focus on big enterprise rollouts of these generative AI tools to all parts of the business. And the attempts there have really been focused on things like efficiency gains and trying to find those use cases that are going to drive the most value. But really, a lot of firms are still struggling to see where that return on investment is going to come from. I think it will come at some point, but people are still figuring out: what are these tools good at, what are they not so good at, and how can I drive really valuable use cases?

And then it wouldn't be a podcast on AI if I didn't mention the emergence of Agentic AI, which is probably the most talked-about term at the moment in this field. And that, again, has shifted the capabilities even further in terms of what AI can do. And that is about giving AI tools a bit more, well, a lot more, autonomy. It's really shifting the focus from the outputs of a large language model once you've prompted it, to action. So, AI tools being able to reason, understand input, and then take action off the back of it, which, again, we'll talk a bit more about today.

Will Chalk:

And how are you seeing this influence board behaviour, Matt?

Matt Worsfold:

Yeah, I think boards in general, like a lot of us, have been struggling to keep up with the pace of technology change. And again, I talked about Agentic AI, that's just making this even harder.

Understanding what these tools can do and how they work is a challenge in itself. We're probably going to see, again, a shift into new areas in AI, and there'll be new capabilities that emerge. And so again, it's just keeping on top of what's changing, why, and what it means.

But really, I think boards are having to grapple with two overarching challenges. So the first is how AI is disrupting the sector that they operate within, and then formulating a plan as to how they respond as boards and how their businesses then respond to that challenge. And that could be anything from, okay, well what are we doing as an organisation to leverage AI, to transform the business, to simplify, to become more digitally focused? And then, as I was saying, looking at the rest of the market and going, right, well, how might this disrupt our place in the market and what are we doing to respond to that? So, that's the first challenge.

And then the second is a bit more internally focused, which is: how do we as a board help manage these risks more effectively? What type of oversight is required? What are the risks to our business based on what we do? What are the types of questions we should be asking management, and what does good look like in their responses? So, there's a lot happening at that board level in terms of asking some of these questions around, okay, how do we respond to what is really a generational shift in the way that a lot of people are going to do business in the future?

Will Chalk:

And Fiona, rather like cybersecurity, this is obviously a global issue, but it appears one with really divergent jurisdictional responses to it. And also like cybersecurity, it feels like the law is lagging behind the technological reality. Where's the regulatory response at the moment in your view?

Fiona Ghosh:

It's actually really, really interesting, because I think regulators globally are damned if they do and damned if they don't. And I think that's a very different picture to almost anything that we have seen before. And we know that regulators always get caught between not wanting to be over-prescriptive and not wanting to seem hands-off. And here what's happened is we've had some specific sectors and some specific regions take a hands-off approach. And obviously in the UK, there's a very classic example of that in terms of the UK government's approach in a 'Trump geopolitical environment'.

And then we can contrast that with China, and we can contrast that most famously with the EU, who went all guns blazing several years ago, almost wanting to set out an equivalent gold standard for AI regulation. I think they looked at GDPR and looked at that roadmap and said, "Well, we set the global gold standard with GDPR. We can now do the same for artificial intelligence. And you know what? Because we are setting down these clear, prescriptive rules for AI classification, risk, use and deployment, that will garner investment and innovation." Something that the EU traditionally is not famous for. And whilst the plan in theory was great for a large economic zone, it's been wrong-footed at a regulatory level by the geopolitics, and actually not just the political movement, but also the technological shift and the pace of that shift. So, even where regulators were on the front foot, they have been caught out.

Now, in contrast to that, the UK has taken an approach where, in the UK's AI action plan, it's essentially said, "We have regulations, we have loads of them, and what we want to do is call upon individual sectoral regulators to set out the rules, and also look to existing rules to manage AI systems. And what will that do? Well, funnily enough, that will also garner investment and growth and innovation." Now, we've had a Treasury Select Committee in the last few weeks heavily criticise, in particular, the FCA for that approach: for not being proactive enough and not giving enough guidance to firms. And as a consequence of that, the FCA, maybe it's not direct, maybe it's just very good timing, but it was very good timing, have proposed the Mills Review to look at the roadmap of future regulation, future innovation and the future pace of technology, with Sheldon Mills to report back to the FCA board.

So, I think it's a really good example of where you're damned if you do and you're damned if you don't, but the real actual point here for boards is that it creates, whichever side of the fence you sit on, uncertainty. And what we are seeing in our day-to-day work is boards feeling a very strong imperative and pressure from investors, shareholders, competitors to be seen to be on the front foot, but not knowing what is the right first step to place there, because of this regulatory indecisiveness.

Will Chalk:

And that leads us on nicely to some of the practicalities in that case. Matt, so the commercial response to the challenge, an AI governance framework, as we've discussed in the past comes in two phases: development and implementation. Take us through the development imperative first, if you would.

Matt Worsfold:

Yeah, I think I'd characterise AI governance as a kind of story over the last three years. So, 2024 was very much the year of the policy, 2025 was the year of the framework, and 2026 is going to be the year of operationalising, or implementing, that framework.

And when I talk about that story, it really originates from the fact that when most businesses got their hands on these tools, the response was very much about developing and releasing what I would call high level policies. Usually, things like acceptable use policies, very much the dos and don'ts. And those are things like, don't put confidential information into public tools, or don't put personal information into any of these tools. And what most businesses found quite quickly is that this just wasn't sufficient for adequately governing and managing the risks of these tools.

And so last year, we saw the development of really what I would call more 'holistic AI frameworks'. And those frameworks then being integrated into things like enterprise policies and procedures, being linked back to overarching strategies. And those frameworks in particular, and this won't be news to a lot of our listeners who have been very focused around enterprise-wide governance, these frameworks were focusing on things like decision and risk ownership, things like escalation mechanisms if you think about the escalation of risks, standardising risk assessments and setting the parameters for those, giving guidance around controls across the AI life cycle, thinking about things like incident management. So, it's the core things you would expect to see in a governance framework but very much tailored to the way that AI operates, how it works, the risks that it presents. And really now what we're seeing in terms of the development of these frameworks is that these should all hang from very central constructs in order to be implemented effectively.

Fiona obviously just talked about what's happening globally from a regulatory standpoint. I think, from a governance perspective, in the absence of that kind of clear steer globally, or at least global harmonisation, a lot of businesses are now looking to things like the ISO standards on AI and the NIST AI risk management framework. Again, these are globally recognised frameworks. And then, when necessary, they're trying to align those to sector- or jurisdiction-specific pieces of regulation.

So, it's definitely emerging. These frameworks, I think, are maturing the way in which AI is governed, and the particularities and nuances that AI presents are now being catered for. But there have been some interesting developments.

I think just in terms of what's been happening recently, in the last three months we've seen some businesses starting to obtain their ISO certifications and using that as a real stamp of: okay, this is what we're doing on AI, this is how we're managing risk, this is how we're governing. Almost a certification. And then, among other developments, we talked a bit about Agentic AI, and that's really now starting to stress test some of these frameworks, which are often newly created or newly implemented. So, Agentic AI is putting new stresses on some of those governance frameworks and arrangements.

Will Chalk:

So, you've taken us to the crucial implementation phase. I mean, developing the framework is difficult enough, but just talk a bit more, if you would, about the obvious necessity of oversight of that framework and ensuring it's being followed. Because policies and frameworks are all very well, but unless they're properly operationalised, they're for the birds, really.

Matt Worsfold:

Yeah, absolutely, absolutely. I mean, step one actually is making those frameworks operational. So, do you have the necessary procedures that underpin each element of the framework and allow for its consistent implementation? We talked a bit about risk assessments; that could be procedures around how you test and evaluate AI systems, so setting performance criteria and monitoring against them, and how you identify and monitor emerging risks. So, getting those procedures right is absolutely key. And, as I say, standardising the way that framework's being adopted.

But then from an oversight perspective, and again, thinking from a board standpoint, it's: how do you integrate this AI governance framework and its implementation into the existing governance frameworks and arrangements within the organisation, and ensure that they align to those enterprise governance frameworks?

So again, these are things like making sure that there are clear and dedicated oversight bodies or committees, and that those committees are feeding risks, monitoring outcomes and audit outcomes into, as I say, existing enterprise governance structures, so that risks are being escalated and articulated effectively all the way up to the board, and so that the board has a really clear understanding as to what the organisation is grappling with. And then, importantly, how they're dealing with those risks and how they're being monitored on an ongoing basis.

And it's about creating that level of transparency and visibility to the board in terms of, as I say, where AI is being used, the material risks that are associated. And again, as I say, how those risks are being mitigated.

Will Chalk:

Fiona, we've touched on this already, boards are understandably struggling to keep up. What in your view should be their starting point?

Fiona Ghosh:

We always advise boards to start off with use case examples, first of all. Principally because it puts some flesh on the bones of what is otherwise a lot of theory and a lot of quite dry governance context.

I think there's been a lot of interest recently in AI literacy, particularly at board level, as well as flowing down to user level. I think what's interesting there is that literacy in theory is a given, but actually, and we all know this, there is nothing like getting stuck into actually using the tool. And the number of board members that have used generative AI is still pretty low. And I can't tell whether this is just generational, or one of seniority, or a bit of both. But when you talk to them, this is why use cases are so important: because until you are actually grappling with the topic yourself, you can't possibly understand some of the issues and some of the risks that are presented. And we all know this, right? Because we know it from our use of social media, from our use of the internet, from our use of any new technology. Once you are using it yourself, you then start to appreciate the risks.

So, this is why we always come back to the use case and the use case deployment. And even something very topical, like we have board minutes that are now recorded by AI agents. And it's really interesting to see boards and their reaction to that, and their understanding of that and their curiosity around that.

Will Chalk:

And concerns about the safety of that as well.

Fiona Ghosh:

And the accuracy, yeah. So all sorts of things, but they're great examples of the much wider risks and the much wider stakeholder input that needs to happen with any AI deployment.

I think when you move into more regulated board environments, obviously, the importance of keeping up and keeping knowledgeable takes on a completely new light, because in the UK we have things like SM&CR and all sorts of other requirements on us, from an operational resilience perspective, from a fit and proper perspective, etc., etc. And that then flows through into Europe under the MiFID rules, etc., etc. Those boards, again, are under enormous pressure to upskill quickly, and we do a lot of work in bringing those board members up to speed, not only with the need to understand technological risk, but with why it is important from a financial services regulatory and supervisory perspective.

So, there's a lot for board members to take on right now, and we always find that if you can put it in a factual scenario, it is a much more tangible thing to get to grips with.

Will Chalk:

I mean, I do wonder, listening to both of you talk, particularly within the regulated sector, whether we are reaching a tipping point on what is realistic to expect of non-executive directors when it comes to getting up to speed and keeping across all of this, as well as the residual business model.

Fiona Ghosh:

Yeah, I think that's a really, really, really good point actually, because I think the position of NEDs probably hasn't come under the level of scrutiny that it will do. Because I absolutely do foresee a time, and it's in the medium term, it is definitely not far away, where I can imagine a board agenda list will incorporate transformational IT as a large part of what it's having to grapple with, because transformational IT will bleed into not only operations, logistics, front office, middle office, back office, it will bleed into people and culture, it will bleed into digital transformation, it will bleed into risk. So, there almost isn't anywhere [it won't bleed into]. I think we are going to see a need and a drive for NEDs to come equipped with that new set of skills. And it's a salutary kind of signpost out to NEDs of the direction of travel.

Will Chalk:

And I'm almost loath to ask, what other tech considerations should be on a board agenda, from your perspective?

Fiona Ghosh:

From my perspective, it will always be around the wider transformation journey that AI will bring us on, because there's a lot of discussion at the moment about AI almost for the sake of AI, right? So, have we brought on Copilot, the Harvey discussion within law firms? It is AI for the sake of AI, and it's almost like we're still in the playground figuring out how to play with these tools, which we are. So from a legal perspective, that's very interesting for us as lawyers, because if we're only at the beginning of the journey of playing with these new toys, we haven't even started to unwrap the risks that are going to be presented when we are fully in the driving seat with them. And that is exactly the same for boards.

But the other thing that this will bring with it is when we're not talking about the AI, because the AI is just technology, then how do boards grapple with that? How do they understand the transformed product and the transformed culture and the risks that that will bring for boards? And so, that is a wider technological consideration because the technology can only operate well in the environment that it's put in. So, how do boards manage that wider ecosystem?

Will Chalk:

I think this podcast needs to come with a few trigger warnings at the beginning of it. Matt, a final reflection from you, if that's okay?

Matt Worsfold:

Yeah, absolutely. Probably a bit of a summary as to what we've spoken about today.

And to lead in from what Fiona was just saying there, the technology itself and the way in which that technology is being adopted and governed and risk managed is evolving at some pace, and has been for a while. And if you mix that in with the regulatory uncertainty that we've spoken about, that really only exacerbates a lot of the challenges that a lot of board members are facing into, in terms of understanding what it is, how it works, and how it impacts their businesses. And that's why that point around literacy is just so key, in terms of staying on top of those developments.

Outside of that, ensuring that really robust, well-tested and well-functioning governance arrangements are in place is also key. I'd reference the Institute of Directors' guidance on AI governance in the boardroom. It's an exceptionally well-written piece on what board members should be thinking about doing, asking and seeing, and what 'good' looks like. So, really getting to grips with those things is of paramount importance to then be able to tackle and address some of the more nuanced or detailed challenges, or even opportunities, that AI presents moving forwards.

Will Chalk:

Matt, Fiona, thank you so much. And thank you for listening to this episode of Ashurst Legal Outlook. To listen to more episodes in this Board Priorities mini-series, just search for Ashurst Legal Outlook on Apple Podcasts, Spotify, or your favourite podcast player. And please visit our Board Priorities homepage and read more about our top priorities for boards in 2026. That's where you'll find our contact details, so do feel free to get in touch.

To receive news and alerts on the kinds of issues raised today, subscribe to our regular governance and compliance updates via ashurst.com.

I'll be back soon with the next episode in this Board Priorities mini-series. Until then, this is me, Will Chalk, saying thank you very much for listening, and goodbye for now.

Keep up to date

Listen to our podcasts on Apple Podcasts, YouTube or Spotify, so you can take us on the go. Sign up to receive the latest legal developments, insights and news from Ashurst.