Business Insight

APRA and ASIC Sound the AI Alarm for Boards and Executives


    What you need to know

    • APRA and ASIC have sent powerful messages to regulated entities regarding AI, cyber security and operational resilience in recent letters to industry.
    • APRA expects all regulated entities to demonstrate strong AI governance and risk management. The regulator called out observed weaknesses in the security and resilience of AI systems used in critical operations and AI risk in the supply chain.
    • ASIC reinforced the need for organisations to provide evidence of cyber resilience basics, clearly stating that, for Boards, assurance alone is not enough.
    • Regulators will not wait for organisations to catch up as AI advances – they will take stronger supervisory action and pursue enforcement where entities fail to adequately identify, manage or control AI and cyber-AI risks.

    What you need to do

    • Assess and evidence your cyber and AI posture against the letters. These letters set the minimum standards expected by regulators, and need to be reviewed in the context of the complexity of your organisation, your critical operations, the data you hold, your role in the financial services industry and the impact on customers of any disruption or incident.
    • Prioritise AI and cyber governance at board level. Ensure the board is sufficiently AI and cyber literate to understand and oversee AI and cyber related risks. Access to the right experts, uplifting Board reporting, training and independent reviews are clear approaches to providing the evidence regulators need to see.
    • Ensure directors are properly informed about AI-related risks. Boards need to go beyond merely applying an inquiring mind and actively press management with challenging questions on emerging AI and cyber risks – this needs to be evidenced in reporting and minutes.
    • Remember that governance and accountability are dynamic, not set-and-forget – and must be unambiguous. Ensure governance structures are tested continuously and are fit for purpose. Cyber and AI need to be clearly articulated in FAR accountability statements and evidenced in Board and management reporting.
    • You need to test your resilience. The golden thread across both letters is the expectation that each organisation actively contributes to the resilience of our financial system. This means assessing and testing the AI and cyber risks in third (and fourth) parties, and having well-tested plans in place to accelerate recovery from disruptive incidents.
    • Prepare for regulator engagement. Organisations must be ready to demonstrate defensible governance and documented risk management strategies.

    The Australian Prudential Regulation Authority (APRA) and Australian Securities and Investments Commission (ASIC) have sent powerful messages to regulated entities regarding AI, cyber security and operational resilience in recent open letters to industry.

    APRA's open letter dated 30 April 2026 to all regulated entities set out its observations and expectations in managing AI-related risk, including the use of AI agents. APRA’s communication marks a genuine turning point – a clear signal that AI adoption across financial services has moved well beyond experimentation, yet risk, accountability, security and operational resilience frameworks have not kept pace with the scale and complexity of deployment.

    That letter was shortly followed by ASIC’s open letter dated 8 May 2026 urging all licensees and market participants to urgently strengthen their cyber resilience measures, warning that frontier AI is intensifying the global cyber risk environment and that entities must not wait for advanced AI tools to uplift their cyber security fundamentals. Together with the first civil penalty for inadequate cyber security conduct under general financial services licence obligations, these interventions signal a coordinated regulatory posture on AI and cyber governance.

    The underlying message from both regulators is clear: boards and executives that fail to keep pace with technology and cyber-related risk will face serious consequences. Regulators will not wait for firms to catch up as AI continues to outpace traditional lawmaking cycles, and the gap between AI ambition and AI governance is widening.

    APRA is clear: AI is not "just another technology"

    APRA's letter followed a targeted review of a select group of large banks, insurers and superannuation trustees in late 2025. The regulator observed differing levels of maturity across governance, risk management and operational resilience, and found that assurance practices are not keeping pace with the scale, speed and complexity of AI adoption.

    The regulator's findings and expectations fall into the following four areas:

    Security practices

    Entities should:

    • assess AI implications for operational resilience and have credible fallback processes for critical operations;
    • implement AI-specific security controls (e.g. strong privileged access management, timely patching, penetration testing and controls over agentic workflows);
    • conduct robust security testing across AI-generated code and software; and
    • continuously consider third-party and concentration implications in relation to common platforms, services and providers.

    Lagging governance maturity

    Entities should have governance arrangements that include:

    • frameworks (policies, standards, guidance) and reporting lines to promote safe, responsible and sustainable adoption of AI;
    • ownership and accountability across the AI lifecycle;
    • an inventory of AI tooling and use cases;
    • human involvement for high-risk decisions and accountability; and
    • staff training on AI use, misuse, limitations and secure practices.

    Supplier risk

    Entities should manage supplier risks including:

    • mapping and maintaining visibility over the full AI supply chain;
    • contractual and governance arrangements providing transparency, auditability and assurance;
    • a capability to understand AI model behaviour and material changes; and
    • active management of concentration risk, including credible and feasible exit or substitution arrangements.

    Change management and assurance

    Entities should adopt effective assurance including:

    • employing globally recognised control frameworks and change control for AI;
    • applying integrated assurance across cyber security, data governance, model risk, resilience and conduct;
    • second line risk management and internal audit functions with technical capability for independent AI assessment (including AI agents); and
    • comprehensive, continuous risk and security assessments proportionate to criticality of the use case.

    ASIC: don't make perfection the enemy of good

    ASIC’s letter focuses on cyber resilience basics and its message is blunt: frontier AI models are lowering the barrier to sophisticated cyber activity, increasing the speed and scale of attacks, and enabling new forms of exploitation. This is not a distant or hypothetical risk – it is here now. ASIC expects entities to return to first principles: strong cyber resilience is not built on novel tools, but on consistent execution of well-established controls, supported by clear governance and adequate resourcing.

    ASIC’s call to action is practical, with a clear list of takeaway actions:

    • reassess cyber plans against the most critical risks in today’s threat environment;
    • confirm that governance frameworks enable clear decision-making and escalation at pace;
    • identify and protect critical assets;
    • strengthen cyber security fundamentals by regularly reviewing and validating core controls;
    • promptly patch systems;
    • minimise attack surface; and
    • prepare for incident response with tested playbooks.

    The regulator emphasises that these are not new expectations. What has changed is the AI-intensified environment in which entities now operate: small weaknesses can now have serious, cascading consequences.

    The regulators are flexing their muscles

    Neither regulator is messing around:

    • APRA will take stronger supervisory action and pursue enforcement where entities fail to adequately identify, manage or control AI risks, including the security and resilience of AI that enables critical operations. This extends to individual accountability – boards and executives cannot delegate away their oversight obligations. APRA will weigh governance structures, decision-making quality and evidence of active stewardship when assessing culpability.
    • ASIC wants entities to return to first principles and achieve strong cyber resilience, avoiding serious harm to businesses and consumers.

    Management and Board reporting may require uplift to ensure the expected levels of governance, accountability, training and testing are adequately evidenced.

    These letters reinforce that boards and executives must treat technology and cyber-related governance as a core oversight obligation. Rather than creating separate AI governance structures, boards and executives should demonstrably embed AI risk management within existing governance, risk and compliance frameworks – extending and adapting current policies, risk appetite statements, escalation pathways and board-level reporting to address AI-specific exposures, and maintaining the evidence, documentation and controls needed to prove it.

    The message is unambiguous: cyber and AI risk is governance risk, and governance risk is accountability risk. Organisations behind on data governance, cyber security and privacy cannot afford to layer AI on top of weak foundations – the consequences will be severe. Now is the time to pressure-test your governance frameworks against regulatory expectations before the regulators do it for you.

    A global regulatory awakening

    These letters are part of a global trend. Regulators worldwide are waking up to AI risk – and acting fast.

    In the UK, a January 2026 House of Commons Treasury Committee report criticised the Bank of England, the Financial Conduct Authority (FCA) and Treasury for slow AI action and for exposing consumers and the financial system to potentially serious harm. The response was swift: a week later the FCA launched a review into AI’s impacts on consumers, retail financial markets and regulators, and in March 2026 the Digital Regulation Cooperation Forum (comprising four UK regulators) published a foresight paper on agentic AI, confirming that existing legal frameworks apply to agentic AI and that businesses must adapt their governance now.

    The recent release of Anthropic's Mythos model has intensified concerns within governments. For example, in April 2026, the UK Government issued an open letter urging business leaders to fundamentally rethink cyber risk.

    In Hong Kong, a coalition of financial regulators (including the Hong Kong Monetary Authority) launched the "GenA.I. Sandbox++” initiative in March 2026 – extending coverage of an earlier pilot to multiple financial sectors with a focus on risk management, anti-fraud and customer experience.

    The direction of travel is clear: AI governance is no longer optional and regulators globally are moving from guidance to enforcement. Australian boards and executives must treat APRA and ASIC's letters as part of a coordinated global shift that demands immediate attention and action.

    Want to know more?

    This publication is a joint publication from Ashurst Australia and Ashurst Risk Advisory Pty Ltd, which are part of the Ashurst Group.

    The Ashurst Group comprises Ashurst LLP, Ashurst Australia and their respective affiliates (including independent local partnerships, companies or other entities) which are authorised to use the name "Ashurst" or describe themselves as being affiliated with Ashurst. Some members of the Ashurst Group are limited liability entities.

    Ashurst Australia (ABN 75 304 286 095) is a general partnership constituted under the laws of the Australian Capital Territory.

    Ashurst Risk Advisory Pty Ltd is a proprietary company registered in Australia and trading under ABN 74 996 309 133.

    The services provided by Ashurst Risk Advisory Pty Ltd do not constitute legal services or legal advice, and are not provided by Australian legal practitioners in that capacity. The laws and regulations which govern the provision of legal services in the relevant jurisdiction do not apply to the provision of non-legal services.

    For more information about the Ashurst Group, which Ashurst Group entity operates in a particular country and the services offered, please visit www.ashurst.com.

    This material is current as at 11 May 2026 but does not take into account any developments to the law after that date. It is not intended to be a comprehensive review of all developments in the law and in practice, or to cover all aspects of those referred to, and does not constitute legal advice. The information provided is general in nature, and does not take into account and is not intended to apply to any specific issues or circumstances. Readers should take independent legal advice. No part of this publication may be reproduced by any process without prior written permission from Ashurst. While we use reasonable skill and care in the preparation of this material, we accept no liability for use of and reliance upon it by any person.