Legal development

AI Powered Payment Agents: The next payments revolution?


    Agentic AI is the 'new kid on the block' in payments innovation. It could revolutionise payments - if legal challenges can be navigated.

    Agentic AI generally refers to AI systems with enhanced autonomy that can act and interact independently in order to reach their objectives. Such systems are capable of executing instructions, learning from their environment, and adapting their processes for executing those instructions. We are already witnessing varied models of agentic AI being deployed in the payments sector – so-called "agentic payments". Examples include Mastercard's "Agent Pay", Visa's "Intelligent Commerce", Amazon's "Buy for Me" and Google's "Shop with AI Mode".

    Notably, AI agents, through advances in the integration of large language models with agent-based systems, can now carry out payment transactions on behalf of a user within a set of predefined parameters provided by the user. This can operate entirely on a conversational basis, in contrast to the traditional UX of selecting options or completing fields in a mobile or web interface. AI agents are also capable of learning from a user's behaviour, predicting the payment transactions that a user may wish to make, and even initiating those payments without any human intervention. This is especially prevalent in the online retail sector, with agentic AI being programmed to assist customers, identify discounts or more cost-efficient alternatives, and ultimately purchase goods and services on behalf of those customers using their payment details.
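    The idea of an agent acting "within a set of predefined parameters" can be illustrated with a minimal sketch. All names and thresholds below are hypothetical illustrations, not any provider's actual API: a user-defined mandate caps spending, restricts merchants, and escalates larger purchases back to the user.

```python
from dataclasses import dataclass

@dataclass
class Mandate:
    """User-defined parameters constraining what the agent may buy."""
    max_amount: float
    allowed_merchants: set[str]
    requires_confirmation_above: float

def agent_may_pay(mandate: Mandate, merchant: str, amount: float) -> str:
    """Return 'pay', 'ask_user' or 'refuse' for a proposed transaction."""
    if merchant not in mandate.allowed_merchants or amount > mandate.max_amount:
        return "refuse"
    if amount > mandate.requires_confirmation_above:
        return "ask_user"  # human-in-the-loop for larger purchases
    return "pay"

mandate = Mandate(max_amount=100.0,
                  allowed_merchants={"grocer.example"},
                  requires_confirmation_above=50.0)
print(agent_may_pay(mandate, "grocer.example", 20.0))  # pay
print(agent_may_pay(mandate, "grocer.example", 80.0))  # ask_user
print(agent_may_pay(mandate, "other.example", 20.0))   # refuse
```

    The same structure could underpin a fully conversational interface: the mandate is simply the machine-readable record of what the user has authorised.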

    In this update, we identify some of the legal challenges with agentic AI in payments in the EU and UK under payment services regulation, data protection regulation and burgeoning AI regulation. Firms operating in this space, particularly retailers and their payment service providers, should also think carefully about the application of consumer protection requirements – especially in relation to disclosures, authority and informed consent by consumers to use agentic AI to conclude purchases of goods or services.

    Payment services regulation

    1. Legislation on the back foot

    While momentum is building behind the use of agentic AI in the payments space, it is faced with a legal and financial services regulatory framework that lags behind.

    For example, the Second EU Payment Services Directive (PSD2) does not refer to the use of AI in the payment services sector, much less seek to regulate its usage. While understandable, given PSD2 was published nearly 10 years ago, less forgivable are the EU legislator's draft texts of the Third Payment Services Directive, which reference the use of AI just once, in the context of fraud prevention. While the EU regulatory framework for payments, and its UK counterpart, include the concept of a commercial agent capable of acting on a payer's or payee's behalf, neither envisages the use of AI to fulfil that role.

    A good example of regulatory uncertainty is the burgeoning use of open banking: many early agentic AI payments use cases focused on card payments. Given the pace of advancement, we can expect the use of agentic AI to expand to include account-to-account payments via open banking interfaces. This raises the question of whether the operator or provider of the AI agent assisting in initiating payment transactions would need to obtain regulatory authorisation as a payment initiation service provider.

    2. Liability for unauthorised transactions

    The question of who is liable for unauthorised transactions initiated by an AI agent lies at the heart of legal risks for merchants and payment service providers receiving or executing agentic-AI-driven payments.

    For example, under PSD2, a payment transaction is considered to be authorised only if the payer has provided consent for the execution of that payment transaction. PSD2 does not prescribe how consent is provided, only that it should be given in the form agreed by the payer and the payer's payment service provider (PSP). In the context of a customer using a payment card to purchase goods or services, that means as agreed between the customer and the card issuer. In other words, it may be insufficient that the customer had agreed with the merchant or a third party that an AI agent could use the card details to complete the payment.

    While customers can give providers of agentic AI the necessary authority to execute payment transactions on their behalf, such authority will need to be robust and reflect clear consent to the AI agent acting for the customer. It remains unclear who bears liability in the event that an AI agent exceeds the scope of its initial mandate or interprets its instructions incorrectly.

    Although the level of risk will depend on the level of autonomy that the agentic AI is given, the additional layer of liability is likely to introduce uncertainty and will need to be clearly addressed in contractual terms. In the near term, merchants and payment service providers may experience greater volumes of refund, chargeback and other dispute processes where agentic AI fails to deliver.

    3. Strong Customer Authentication

    In the EU and UK, Strong Customer Authentication (SCA) requirements under PSD2 (and retained in UK law) will be an important area to navigate when using AI agents to initiate payments.

    SCA will be required where an AI agent initiates a payment on behalf of a payer, or establishes a mandate for merchant-initiated transactions (unless an exemption or exclusion applies). In such cases, the payer's PSP will need to authenticate the payer using two of the three elements of inherence, knowledge and possession. The SCA elements are generally based on human interaction e.g. providing a passcode, passing a face recognition test or being in possession of a mobile device or payment card. At first glance, this may present an issue for an AI agent, acting without a human in the loop at the point of sale.
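    The two-of-three requirement described above can be sketched in a few lines. This is purely illustrative (the category names mirror the regulatory elements; the function and evidence structure are hypothetical): authentication passes only when at least two independent element categories are satisfied.

```python
def sca_satisfied(evidence: dict[str, bool]) -> bool:
    """Check whether at least two of the three SCA element
    categories (knowledge, possession, inherence) are met."""
    categories = {"knowledge", "possession", "inherence"}
    passed = {c for c in categories if evidence.get(c)}
    return len(passed) >= 2

# Passcode (knowledge) plus registered device (possession) -> passes
print(sca_satisfied({"knowledge": True, "possession": True}))  # True
# Face recognition alone (inherence) -> fails
print(sca_satisfied({"inherence": True}))                      # False
```

    The design question for agentic payments is how an agent supplies or brokers these elements when no human is present at the point of sale, for example by relying on device-bound credentials established during an earlier authenticated session.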

    However, there are a number of ways in which AI agents can facilitate strong customer authentication processes. These can include mechanisms involving tokenisation of credentials, the use of biometric data on mobile devices, and processes to help identify and provide context for SCA requests. In addition, there are a number of agentic AI payment use cases that could benefit from existing exemptions and exclusions from SCA, e.g. whitelisted beneficiaries and merchant-initiated transactions. So, whilst regulation may be creating uncertainty, we are certainly seeing novel uses of existing exemptions and the testing of current perimeters: the march of consumer spending (and the tech innovation which enables it) continues unabated.

    4. Sensitive payment data

    AI agents that assist customers in purchasing goods and services would generally need access to a customer's payment credentials (e.g. credit card number, expiry date etc.). A customer sharing payment credentials with a third-party agentic AI provider may be prevented from doing so under existing regulatory provisions and contractual restrictions on sharing sensitive payment data.

    For example, under PSD2, in order to be eligible for refunds for unauthorised transactions, customers are required to take "all reasonable steps" to keep their personalised security credentials safe. Similarly, the customer's PSP is required to ensure that these credentials are inaccessible to everyone bar the customer. It is also standard market practice that contracts governing a customer's relationship with its PSP (e.g. a card issuer) would contain provisions restricting the customer from sharing credentials with third parties.

    Firms offering agentic AI services to customers that include payment initiation will need to consider potential mitigants, such as structuring data flows so that the AI agent does not receive payment credentials directly, or so that payment credentials are only shared in tokenised form.
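    A minimal sketch of the tokenised data flow may help: the class and method names below are hypothetical, and real tokenisation schemes (e.g. network tokens) are considerably more involved. The point is structural: the raw card number stays inside a PSP-side vault, and the agent only ever handles an opaque token.

```python
import secrets

class IssuerVault:
    """Hypothetical PSP-side vault: only it can map tokens back
    to the underlying card number (PAN)."""
    def __init__(self) -> None:
        self._tokens: dict[str, str] = {}

    def tokenise(self, pan: str) -> str:
        # Opaque random token; carries no information about the PAN.
        token = "tok_" + secrets.token_hex(8)
        self._tokens[token] = pan
        return token

    def detokenise(self, token: str) -> str:
        return self._tokens[token]

vault = IssuerVault()
token = vault.tokenise("4111111111111111")
# The AI agent is handed only `token`; the raw PAN never leaves the vault,
# so a compromise of the agent does not expose the card number itself.
print(token.startswith("tok_"))  # True
```

    Under this structure, the customer arguably never "shares" the credential with the agent at all, which may ease the regulatory and contractual tensions described above.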

    Other regulatory considerations

    Appropriate consideration must be given to the personal data flows involved with AI agent use cases and existing data protection laws. This can include a focus on transparency with customers regarding personal data processed and the use of an AI agent (such as through privacy notices / disclaimers), putting in place any relevant contractual arrangements between businesses utilising AI agents and the providers of such AI agents, and between firms and their customers.

    Looking towards the staggered implementation of the EU AI Act, firms should continue to undertake appropriate reviews of their AI use to determine the application of the EU AI Act and any relevant obligations. The definition of AI systems under the EU AI Act is broad and accounts for varying levels of AI autonomy, which should capture agentic AI.

    Finally, the Financial Conduct Authority (FCA) has re-affirmed its continued reliance on existing laws for the regulation of AI in UK financial services. However, with the Digital Regulation Cooperation Forum (bringing together the FCA, the Competition and Markets Authority, the Information Commissioner's Office and the Office of Communications) recently launching its call for views on agentic AI and regulatory challenges, it remains to be seen whether this position will be stress-tested by an incoming agentic AI payments revolution.

    As with so many topics at the intersection of AI and financial services products, it’s a case of watch this space.

    Other author: Saba Nasrolahi, Junior Associate

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.