Legal development

Free agent? Not quite... new UK guidance on agentic AI for businesses


    On 9 March 2026, the Competition and Markets Authority (CMA) published new guidance on how businesses should ensure compliance with UK consumer protection law when using AI agents (Guidance), alongside a detailed policy paper examining how agentic AI could transform consumer markets (Policy Paper).

    What you need to know 

    • The CMA's latest guidance makes clear that consumer protection law applies whether claims or practices are delivered by humans or AI systems. So too does the CMA's direct enforcement power, under the Digital Markets, Competition and Consumers Act 2024 (DMCC Act), to impose fines of up to 10% of a company's global turnover for breaches (including for misleading practices).
    • Agentic AI (i.e., systems that can autonomously plan, decide and act on a user's behalf) represents a step change from current AI tools. Whilst the technology offers significant potential benefits, it also raises heightened risks around manipulation, errors and loss of consumer agency.
    • The CMA has made clear that existing consumer protection law applies in full to AI-powered systems. Businesses remain responsible for what their AI agents do, in the same way as they are responsible for actions of their employees. 

    What you need to do

    • Ensure you are aware of how consumer law may apply to your business.
    • Train AI agents to comply with consumer law before deployment.
    • Monitor AI agent performance, including through appropriate human oversight to catch any errors (including hallucinations) and ensure legal compliance. 
    • Be transparent with customers about the use of AI agents, particularly where this might impact their decision to purchase a good or use a service.
    • Act quickly to refine AI agents when problems are identified, especially where agents interact with large numbers of people or vulnerable consumers.

    What is agentic AI?

    Artificial Intelligence (AI) has increasingly become part of everyday life, shaping how people access information, consume services and make decisions. To date, however, most consumer-facing AI has operated as a tool that supports decisions, whilst coordination, monitoring and any action required to receive an output are performed by a human user. 

    Unlike traditional chatbots, agentic AI is artificial intelligence that has "agency": these agents are designed to mimic human decision-making and accomplish a specific task or goal from start to finish with minimal human intervention. They may assess goals, break tasks into subtasks, retrieve real-time data (including personal data), execute actions autonomously (such as making payments or deleting emails), and store memory of past interactions to improve over time.

    In effect, as the CMA Policy Paper identifies, consumers utilising AI agents can shift from "using tools to delegating outcomes", changing how businesses and consumers engage with and use AI. The Guidance assesses specific examples of AI agents being used to interact with customers, including to handle customer queries, process refunds and manage marketing campaigns. 

    What businesses using AI agents should consider

    The CMA's Guidance sets out several core considerations for businesses using (or wanting to use) AI agents in consumer-facing interactions:

    Businesses remain responsible for the AI agents they use

    Consumer protection law requires businesses to treat customers fairly. This is irrespective of whether a consumer interacts with a person or an AI agent. The CMA is explicit in its Guidance: businesses remain responsible for what an AI agent does in the same way they are responsible for what an employee does. This applies even if a third party designed or provided the AI agent for the business to use.

    The CMA has indicated that it is likely unlawful if an AI agent steers, pressures or misleads consumers in a way that harms a consumer's economic interests.

    Businesses must train, monitor and address issues swiftly

    The CMA Guidance emphasises that a continuous compliance approach is needed when using AI agents and suggests three core considerations.

    Train and test: Think carefully about what the AI agent will be set up to do and how that might affect customers. Consumer protection principles should be embedded into the design of agentic systems. For example: 

    • training to ensure an agent will not mislead consumers; 
    • ensuring refund / cancellation rights are clear and not breached;
    • ensuring any marketing material fed into (or generated by) an agent includes accurate and up-to-date information about prices; and
    • ensuring that paid endorsements are properly labelled as such.

    Additionally, testing an AI agent's output is a crucial part of training. This includes evaluating the agent's performance, for example, through A/B testing (a method of comparing two versions of something, typically an app feature, website or online design aspect, to see which one performs better based on real user behaviour). 
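    By way of illustration, this kind of evaluation can be sketched in a few lines of Python. The variant names, success rates and "successful interaction" criterion below are all assumptions invented for the example, not part of the Guidance; a real deployment would measure actual consumer outcomes rather than simulate them.

    ```python
    import random

    def simulate_interaction(variant: str) -> bool:
        """Hypothetical stand-in for one consumer interaction with an AI agent
        variant; returns True if the outcome was satisfactory (for example, the
        consumer's query was resolved without complaint)."""
        assumed_success_rate = {"A": 0.80, "B": 0.90}[variant]  # illustrative only
        return random.random() < assumed_success_rate

    def ab_test(n_per_variant: int = 1000) -> dict:
        """Run n simulated interactions per variant and report the observed
        success rate for each, so the better-performing configuration can be
        identified before wider rollout."""
        results = {}
        for variant in ("A", "B"):
            successes = sum(simulate_interaction(variant) for _ in range(n_per_variant))
            results[variant] = successes / n_per_variant
        return results

    random.seed(42)  # fixed seed so the illustration is reproducible
    print(ab_test())
    ```

    In practice the same comparison would be run against live traffic, with legal and compliance review of what each variant is permitted to say to consumers.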

    Monitor: Frequently check that AI agents are delivering correct results, behaving as intended, and complying with consumer protection law. Businesses should include human oversight to identify any mistakes, particularly as AI agents can "hallucinate" results that are incorrect. 

    Swiftly address issues: If an AI agent is not performing as expected or making errors, businesses must act quickly to address any problems. This is especially important where AI agents interact with large numbers of consumers, especially vulnerable consumers. The CMA has indicated that the sooner a business can rectify an issue, the better. If a business does not rectify a problem swiftly and an AI agent does something illegal, the business will be held accountable. 

    Businesses should be transparent when using AI agents

    Businesses should tell consumers when they are utilising AI agents, especially where dealing with an AI agent rather than a person might affect a consumer's decision to proceed or engage with the business. Additionally, businesses should not overstate or misrepresent what an agent can or cannot do.

    The CMA's Policy Paper: illustrative example – benefits, risks and safeguards 

    The Policy Paper highlights these considerations through an illustrative case study involving a personal shopping and finance agent. In this example the agent monitors prices, negotiates offers, switches providers, and executes purchases on a consumer's behalf. The case study sets out both the potential benefits (reduced cognitive load, time savings, accessible optimisation) and the consumer law risks that businesses using agents might face (lack of transparency about limitations of the agent, erroneous switching decisions, hidden incentives). The safeguards the CMA identifies to mitigate these risks include: 

    • taking accountability (including liability) for the agent's actions;
    • training the agent to provide clear disclosure of important information;
    • ensuring the agent gets mandatory confirmation from users for high-risk actions;
    • maintaining robust audit logs that show rapid intervention and remedial actions when errors occur; and
    • using human oversight to monitor performance, complaints and feedback, with clear processes to resolve or escalate them as appropriate.
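    Two of these safeguards, mandatory confirmation for high-risk actions and a robust audit log, lend themselves to concrete implementation patterns. The Python sketch below illustrates one such pattern; the £100 risk threshold, the action types and the data structures are hypothetical choices made for the example, not drawn from the Policy Paper.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical threshold above which an action counts as "high-risk"
    # and requires explicit user confirmation (an assumption for illustration).
    HIGH_RISK_THRESHOLD_GBP = 100.0

    @dataclass
    class AgentAction:
        description: str
        amount_gbp: float

    @dataclass
    class AuditedAgent:
        """Sketch of two safeguards the CMA identifies: mandatory user
        confirmation for high-risk actions, and an audit log recording what
        the agent did and whether the user approved it."""
        audit_log: list = field(default_factory=list)

        def execute(self, action: AgentAction, confirm) -> bool:
            high_risk = action.amount_gbp >= HIGH_RISK_THRESHOLD_GBP
            # Low-risk actions proceed; high-risk ones need the user's say-so.
            approved = confirm(action) if high_risk else True
            self.audit_log.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "action": action.description,
                "amount_gbp": action.amount_gbp,
                "high_risk": high_risk,
                "approved": approved,
            })
            return approved  # caller only carries out the action when True

    agent = AuditedAgent()
    agent.execute(AgentAction("renew subscription", 9.99), confirm=lambda a: False)
    agent.execute(AgentAction("switch energy provider", 350.0), confirm=lambda a: False)
    print([(e["action"], e["approved"]) for e in agent.audit_log])
    # → [('renew subscription', True), ('switch energy provider', False)]
    ```

    The point of the pattern is that the log shows both what the agent attempted and where a human intervened, which is exactly the evidence trail the CMA suggests maintaining.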

    Building on the CMA's existing AI frameworks and guidance 

    The Guidance and Policy Paper build on the CMA's earlier work on AI foundation models. This work outlined the impact of AI foundation models on consumer protection law and highlighted the need for transparency and accountability within the AI ecosystem. The Policy Paper expressly references these principles, making clear that transparency and accountability remain particularly important as systems become more autonomous. The CMA also considered the capabilities and risks of agentic AI in its Frontier AI: capabilities and risks discussion paper, published in April 2025.

    The CMA has also highlighted the relevance of its work on algorithmic pricing and competition law, referring to its blogs on AI and collusion published in November 2024 and March 2026. In its March 2026 blog, the CMA indicated that the use of AI agents could intensify risks around coordinated pricing outcomes (by learning and reacting to other AI agents in a concentrated market), even without explicit communication between businesses or instructions to do so. 

    The blog also emphasised the practical steps businesses should take to mitigate competition law risks. These include ensuring that competitively sensitive information is not shared with rivals (whether directly or indirectly through pricing software), and auditing input data and algorithmic approaches where appropriate. The CMA noted that businesses using the same algorithm as a competitor should take extreme care, particularly where there is a risk that pricing recommendations could be drawing on confidential information from rivals. In light of the new Guidance, this risk may be heightened if an AI agent is free to make recommendations to customers with little to no oversight. 

    How is the CMA using AI?

    The Policy Paper also explains the steps the CMA is taking both to improve its understanding of AI and agentic systems and to implement AI within its own processes and functions. This includes its investment in in-house data science and technology expertise, its use of AI in screening for cartels (in particular to detect bid rigging in public procurement), and its work with other regulators, including through the Digital Regulation Cooperation Forum for UK regulators and international forums.

    What are the risks and challenges for businesses?

    The Policy Paper identifies several material risks that may arise when deploying agentic AI. Businesses need to take appropriate steps to manage these risks, including by reference to the Guidance to ensure compliance with consumer protection law.

    • Agent is not a "faithful servant": AI agents may not act in accordance with consumer interests, potentially steering or manipulating users towards outcomes that benefit a business rather than the consumer. This may be the case in particular where agents deploy personalisation or adaptive behaviour.
    • Errors and reliability: AI agents may "hallucinate" incorrect information, and errors in consumer interactions can have significant real-world consequences.
    • Bias and discrimination: Agentic AI may amplify existing biases in data or decision-making, especially where outcomes emerge from opaque reasoning that is difficult to observe or explain. 
    • Loss of agency: Consumers may over-rely on AI agents, deferring too easily to automated decisions and becoming less able to scrutinise an AI-driven result. 
    • Agentic collusion: Where multiple businesses deploy autonomous agents that optimise pricing, interaction between these systems may breach competition and consumer laws. 
    • Lock-in: Where agentic systems operate within closed ecosystems with limited interoperability, switching providers or moving consumer data may become more challenging, creating a risk of customer lock-in. 

    What should businesses do now?

    The Policy Paper and Guidance signal that agentic AI remains firmly on the CMA's radar. Whilst the technology continues to develop, the message for businesses is clear: the time to consider compliance is now, not when problems emerge.

    Businesses using agentic AI should:

    • review existing AI deployments against the framework set out in the Guidance (i.e. train, monitor and address issues swiftly) and assess whether current levels of human oversight are sufficient;
    • ensure legal teams are involved in AI procurement, training and deployment decisions, especially where third-party AI systems and tools are used; and
    • consider the competition law implications of using AI-informed pricing strategies, with reference to the CMA's guidance on algorithms and collusion. 

    Comment

    Given the rapid expansion in both the capabilities and adoption of AI and agentic systems, the CMA's Policy Paper and Guidance are helpful contributions to understanding how the UK's consumer protection regime may apply to address the challenges raised by rapid technological development in this field. The challenge for businesses will be how to embed compliance principles into their tools and agents, such that they can be trusted (both by the business and consumers) to deliver fair and transparent outcomes. Given the CMA's direct enforcement and fining powers, the risks are potentially significant where agents lead to outcomes that unfairly disadvantage consumers or cause consumers to take decisions that may not be in their best interests.


    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.