Regulators unite
Agentic AI – advanced AI systems requiring minimal supervision to accomplish tasks – is the focus of a cross-regulatory foresight paper recently published by the Digital Regulation Cooperation Forum (DRCF). In The Future of Agentic AI, members of the DRCF (the CMA, FCA, ICO and Ofcom) are unanimous in their vision of regulation as an enabler of innovation, so that emerging technologies develop in ways that promote economic growth and competition without sacrificing consumer protection. All DRCF members share the view that existing UK legal frameworks apply to agentic AI in any event, and that businesses must adapt their governance accordingly. Although the paper aims to stimulate debate, the emerging strategy is clear – ensuring that these frameworks, whether in their current or a future form, provide a clear and safe pathway to innovation for organisations deploying this novel technology.
Our key takeaways:
- Regulation as an enabler of innovation: Maintaining the UK's pro-innovation, principles-based approach to AI regulation is an overarching theme, and is presented as key to enabling innovation in the agentic AI sphere. Existing frameworks are applied flexibly by sectoral regulators in an increasingly coordinated way, with regulators actively collaborating through shared horizon scanning, joint scenario analysis and coordinated stakeholder engagement. The paper acknowledges the apparent divergence of approach in the EU with the implementation of the AI Act and notes that, for businesses operating across both the UK and the EU, this growing divergence will significantly affect compliance strategy.
- Identifying 'agentic' in practice: The DRCF sets out a five-level "spectrum of autonomy" - from "tool" through "assistant" and "operator" to the largely theoretical "collaborator" and "autonomous actor". This taxonomy matters because the regulatory implications scale with the level of autonomy. An AI "tool" that summarises a document raises very different questions from an "operator" that autonomously executes expense claims, triggers payments and shares personal data with third parties. Classifying systems along this spectrum will become a foundational step in governance, risk assessment and internal accountability.
- Novel vs amplified risk: Helpfully, the paper distinguishes between "amplified" and "novel" risks. Many risks – such as those around data protection, cybersecurity and automated decision-making – are not new, but are intensified by the scale and autonomy of agentic systems. Others, however, are genuinely novel. "Action bundling", where agents execute multiple legal and commercial steps invisibly in a single flow, challenges traditional concepts of consent and consumer understanding. More strikingly, the paper highlights algorithmic collusion, where AI agents have spontaneously colluded in price-setting, bidding and financial market simulations, risking inadvertent anti-competitive outcomes. This split should inform firms' compliance plans: amplified risks often have established compliance playbooks that need updating, whereas novel risks may require entirely new governance mechanisms.
- Cross-regulatory exposure is real: The paper makes clear that agentic AI will not fit neatly within existing regulatory silos. A single deployment can simultaneously trigger multiple concerns, meaning that siloed compliance models will not work. Businesses will need to move towards cross-regulatory impact assessments at the design stage, rather than treating compliance as a downstream exercise. If - for example - an AI-driven payment agent commits fraud using personal data in the context of a digital service (such as a streaming platform), this could engage the remits of the FCA, ICO, Ofcom and CMA concurrently, and the effectiveness of any intervention will depend on how coordinated and collaborative a multi-regulator response can be in practice. This raises important questions around regulatory overlap, sequencing of enforcement, and the potential for inconsistent expectations across regimes - issues that are likely to become more acute as agentic systems evolve.
- The 'many hands' problem: The paper highlights the risks of fragmented accountability across model providers, system providers, and downstream deployers. As agentic systems interact and errors cascade, identifying responsibility becomes increasingly difficult, both for businesses and consumers seeking to exercise their rights. Crucially, the DRCF is clear that autonomy does not dilute legal accountability. For organisations, this places renewed emphasis on supply chain mapping and clear contractual allocation of responsibility across the entire AI stack.
- The Online Safety Act could catch agentic AI: A regulatory exposure that has so far gone largely unnoticed is the potential application of the Online Safety Act to AI agents. Where an AI agent retrieves information from multiple websites and presents it back to the user, it could be deemed to include (or be) a search engine under the Act. This would make the provider a regulated search service, with statutory obligations to assess and mitigate the risk of users encountering illegal content (such as fraud) and content that is harmful to children. For businesses deploying agentic AI tools that aggregate or compare third-party content – product comparison agents, research assistants or switching services – this is a classification risk that warrants careful analysis before deployment.
What's on the horizon?
The DRCF has set out a forward-looking programme for 2026/27. This signals sustained regulatory focus on how consumers interact with AI-driven systems, including:
- evolving user interfaces between individuals, firms and digital services;
- consumer robotics and embodied AI;
- real-world consumer experiences of emerging technologies.
At the individual regulator level:
- ICO: A forthcoming statutory Code of Practice on AI and automated decision-making will materially raise the compliance bar, with potential evidential weight in enforcement.
- FCA: Cohort 2 of AI Live Testing is launching, the Supercharged Sandbox continues, and the Mills Review will report to the FCA Board in the summer on how advanced AI models could reshape retail financial services by 2030.
- Ofcom: The 2026/27 edition of Ofcom's Strategic Approach to AI will be published later this year, and Ofcom will continue to assess agentic AI adoption in telecoms and its impact under the Online Safety Act.
- CMA: Practical guidance for businesses on agentic AI in consumer-facing services and pricing has been published, and the CMA will continue domestic and international collaboration, including through its chairing of the International Competition Network's Technologist Group.
A single agentic AI deployment could simultaneously generate data protection, online safety, financial regulatory and consumer protection issues involving multiple regulators, each with its own focus. Co-ordinating a prompt, effective cross-regulatory response will be a true test of supervisory collaboration.