Free agent? Not quite... new UK guidance on agentic AI for businesses
On 9 March 2026, the Competition and Markets Authority (CMA) published new guidance on how businesses should ensure compliance with UK consumer protection law when using AI agents (Guidance), alongside a detailed policy paper examining how agentic AI could transform consumer markets (Policy Paper).
Artificial Intelligence (AI) has increasingly become part of everyday life, shaping how people access information, consume services and make decisions. To date, however, most consumer-facing AI has operated as a tool that supports decisions, whilst coordination, monitoring and any action required to obtain an output are performed by a human user.
Unlike traditional chatbots, agentic AI is artificial intelligence that has "agency": these agents are designed to mimic human decision-making and accomplish a specific task or goal from start to finish with minimal human intervention. They may assess goals, break tasks into subtasks, retrieve real-time data (including personal data), execute actions autonomously (such as making payments or deleting emails), and store memory of past interactions to improve over time.
In effect, as the CMA Policy Paper identifies, consumers utilising AI agents can shift from "using tools to delegating outcomes", changing how businesses and consumers engage with and use AI. The Guidance assesses specific examples of AI agents being used to interact with customers, including to handle customer queries, process refunds and manage marketing campaigns.
The CMA's Guidance sets out several core considerations for businesses using (or wanting to use) AI agents in consumer-facing interactions:
Consumer protection law requires businesses to treat customers fairly. This is irrespective of whether a consumer interacts with a person or an AI agent. The CMA is explicit in its Guidance: businesses remain responsible for what an AI agent does in the same way they are responsible for what an employee does. This applies even if a third party designed or provided the AI agent for the business to use.
The CMA has indicated that conduct is likely to be unlawful where an AI agent steers, pressures or misleads consumers in a way that harms their economic interests.
The CMA Guidance emphasises that a continuous compliance approach is needed when using AI agents and suggests three core considerations.
Train and test: Think carefully about what the AI agent will be set up to do and how that might affect customers. Consumer protection principles should be embedded into the design of agentic systems from the outset.
Additionally, testing an AI agent's output is a crucial part of training. This includes evaluating the agent's performance, for example, through A/B testing (a method of comparing two versions of something, typically an app feature, website or online design aspect, to see which one performs better based on real user behaviour).
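The A/B testing the Guidance refers to can be made concrete with a minimal, illustrative sketch. The figures below are hypothetical, and the two-proportion z-test is simply one common way to compare two variants; nothing in the Guidance prescribes a particular statistical method.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: compare conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se                                   # z-statistic

# Hypothetical figures: variant A converts 120 of 2,000 users, variant B 150 of 2,000
z = two_proportion_z(120, 2000, 150, 2000)
print(f"z = {z:.2f}")  # → z = 1.89; |z| > 1.96 would indicate significance at the 5% level
```

Here variant B performs better, but not by enough to be statistically significant at the conventional 5% threshold, so a business might gather more data before switching.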
Monitor: Frequently check that AI agents are delivering correct results, behaving as intended, and complying with consumer protection law. Businesses should include human oversight to identify any mistakes, particularly as AI agents can "hallucinate" results that are incorrect.
Swiftly address issues: If an AI agent is not performing as expected or is making errors, businesses must act quickly to address any problems. This is particularly important where AI agents interact with large numbers of consumers, including vulnerable consumers. The CMA has indicated that the sooner a business can rectify an issue, the better. If a business does not rectify a problem swiftly and an AI agent does something illegal, the business will be held accountable.
Businesses should tell consumers when they are utilising AI agents, especially where dealing with an AI agent rather than a person might affect a consumer's decision to proceed or engage with the business. Additionally, businesses should not overstate or misrepresent what an agent can or cannot do.
The Policy Paper highlights these considerations through an illustrative case study involving a personal shopping and finance agent. In this example the agent monitors prices, negotiates offers, switches providers, and executes purchases on a consumer's behalf. The case study sets out both the potential benefits (reduced cognitive load, time savings, accessible optimisation) and the consumer law risks that businesses using agents might face (lack of transparency about the agent's limitations, erroneous switching decisions, hidden incentives). The CMA also identifies safeguards to mitigate these risks.
The Guidance and Policy Paper build on the CMA's earlier work on AI foundation models. This work examined the implications of AI foundation models for consumer protection law and highlighted the need for transparency and accountability within the AI ecosystem. The Policy Paper expressly references these principles, making clear that transparency and accountability remain particularly important as systems become more autonomous. The CMA also considered the capabilities and risks of agentic AI in its Frontier AI: capabilities and risks discussion paper, published in April 2025.
The CMA has also highlighted the relevance of its work on algorithmic pricing and competition law, referring to its blogs on AI and collusion published in November 2024 and March 2026. In the March 2026 blog, the CMA indicated that the use of AI agents could intensify risks around coordinated pricing outcomes (by learning and reacting to other AI agents in a concentrated market), even without explicit communication between businesses or instructions to do so.
The blog also emphasised the practical steps businesses should take to mitigate competition law risks. These include ensuring that competitively sensitive information is not shared with rivals (whether directly or indirectly through pricing software), and auditing input data and algorithmic approaches where appropriate. The CMA noted that businesses using the same algorithm as a competitor should take particular care, especially where there is a risk that pricing recommendations could be drawing on confidential information from rivals. In light of the new Guidance, this risk may be heightened if an AI agent were free to make recommendations to customers with little or no oversight.
The Policy Paper also explains the steps the CMA is taking both to improve its understanding of AI and agentic systems and to implement AI within its own processes and functions. This includes its investment in in-house data science and technology expertise, its work to use AI expertise in screening for cartels (in particular to detect bid rigging in public procurement), and its work with other regulators including through the Digital Regulation Cooperation Forum for UK regulators and international forums.
The Policy Paper identifies several material risks that may arise when deploying agentic AI. Businesses need to take appropriate steps to manage these risks, including by reference to the Guidance to ensure compliance with consumer protection law.
| Risk | Description |
|---|---|
| Agent is not a "faithful servant" | AI agents may not act in accordance with consumer interests, potentially steering or manipulating users towards outcomes that benefit a business rather than the consumer. This may be the case in particular where agents deploy personalisation or adaptive behaviour. |
| Errors and reliability | AI agents may "hallucinate" incorrect information, with errors in consumer interactions potentially having significant real-world consequences. |
| Bias and discrimination | Agentic AI may amplify existing biases in data or decision-making, especially where outcomes emerge from opaque reasoning that is difficult to observe or explain. |
| Loss of agency | Consumers may over-rely on AI agents, deferring too readily to automated decisions and becoming less able to scrutinise an AI-driven result. |
| Agentic collusion | Where multiple businesses deploy autonomous agents that optimise pricing, interaction between these systems may breach competition and consumer laws. |
| Lock-in | Where agentic systems operate within closed ecosystems with limited interoperability, switching providers or moving consumer data may become more challenging, leading to a risk of customer lock-in. |
The Policy Paper and Guidance signal that agentic AI remains firmly on the CMA's radar. Whilst the technology continues to develop, the message for businesses is clear: the time to consider compliance is now, not when problems emerge.
Businesses using agentic AI should review their use of these systems against the Guidance.
Given the rapid expansion in both the capabilities and adoption of AI and agentic systems, the CMA's Policy Paper and Guidance are helpful contributions to understanding how the UK's consumer protection regime may apply to the challenges raised by rapid technological development in this field. The challenge for businesses will be how to embed compliance principles into their tools and agents, such that they can be trusted (by both the business and consumers) to deliver fair and transparent outcomes. The risks are potentially significant given the CMA's direct enforcement and fining powers, particularly where agents lead to outcomes that unfairly disadvantage consumers or cause them to take decisions that may not be in their best interests.
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.