Legal development

Use of AI and MIFID: How your firm can comply with obligations


    ESMA has published a statement on the use of AI systems by investment firms and the relevant MIFID considerations. This appears to be ESMA's most detailed and significant statement to date for investment firms in this area (although ESMA officials have previously given speeches on the increasing use of AI systems by investment firms and the importance of investor protection), and it contains important information for firms.

    The Statement comes against the backdrop of increased regulatory scrutiny of AI (see briefings here and here for more information) and the finalisation of the EU AI Act (for more information, see briefing here).

    The Statement by ESMA aims to set out how firms using or planning to use AI technologies can comply with MIFID, particularly in the areas of organisational requirements, conduct of business requirements and the importance of prioritising clients' best interests. It is designed to apply not only where AI tools are specifically developed or officially adopted by the firm or bank, but also where the firm uses third party AI technologies (e.g. ChatGPT), with or without the direct approval of senior management.

    ESMA has identified the following risks for clients in relation to the use of AI systems:

    • lack of accountability and oversight (over-reliance);
    • lack of transparency and explainability/interpretability;
    • security/data privacy; and
    • robustness/reliability of output, quality of training data.

    Client best interests and information to clients

    • Firms need to act in the best interests of clients and be transparent about the role of AI in investment decision-making processes related to the provision of investment services. Firms are expected to provide information on how they use AI tools and must ensure that such information is presented in a clear and not misleading manner (regulators in other jurisdictions have already taken action against firms for misleading investors about the use, or extent of use, of AI).
    • Investment firms using AI for client interactions, such as chatbots or other types of AI-related automated systems, need to disclose the use of the technology to clients during those interactions.

    Organisational requirements

    • Firms' management bodies need to have an appropriate understanding of how AI technologies are applied and used within their firm and ensure appropriate oversight of these technologies. This is to ensure alignment of the AI systems with the firm's overall strategy, risk tolerance and compliance framework.
    • The management body is responsible for establishing robust governance structures monitoring the performance and impact of AI tools on the firm's services.
    • Firms need effective risk management frameworks specific to AI implementation and application.
    • Senior management need to cultivate a culture of risk ownership, transparency and accountability, with the implications of AI use being regularly assessed and necessary adjustments made in light of changing market conditions.
    • The data used as input for AI tools employed in investment decision-making processes needs to be relevant, sufficient and representative.
    • Firms should have clear documentation and reporting mechanisms to ensure transparency and accountability in AI-related risk management practices.
    • Firms are expected to have comprehensive testing and monitoring systems to assess the performance and impact of AI applications in their offerings (proportionality principle applies). Specific attention should be given to areas where AI has the most significant influence on firms' processes and client services related to the provision of retail investment services.
    • Firms should note relevant MIFID requirements on outsourcing of critical and important operational functions when using third party AI tools to provide investment services.
    • Firms should carry out post-interaction assessments to monitor and evaluate the process of delivering information directly or indirectly through AI-driven mechanisms. ESMA considers these assessments critical for ensuring ongoing compliance with MIFID and preventing the spread of inaccurate or misleading information about investment products and services.
    • Firms need to have adequate training programmes (covering potential risks, ethical considerations and regulatory implications) for staff on the topic of AI, to ensure that staff can manage and work with AI.

    Conduct of business requirements

    • Firms should have rigorous quality assurance processes for their AI tools (this should include thorough testing of algorithms and outcomes for accuracy).
    • Robust controls need to be in place when AI systems are used in the product governance context. There needs to be an increased level of diligence, especially when ensuring the suitability of services and financial instruments provided to each client.
    • Stress tests need to be carried out to see how AI systems perform under extreme market pressure.

    Record-keeping

    • ESMA expects comprehensive records on AI utilisation and on any related complaints. These records will need to set out the use of AI technologies in various parts of investment services provision (e.g. to include data sources used and algorithms implemented).

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.