APRA and ASIC Sound the AI Alarm for Boards and Executives
The Australian Prudential Regulation Authority (APRA) and Australian Securities and Investments Commission (ASIC) have sent powerful messages to regulated entities regarding AI, cyber security and operational resilience in recent open letters to industry.
APRA's open letter dated 30 April 2026 to all regulated entities set out its observations and expectations in managing AI-related risk, including the use of AI agents. APRA’s communication marks a genuine turning point – a clear signal that AI adoption across financial services has moved well beyond experimentation, yet risk, accountability, security and operational resilience frameworks have not kept pace with the scale and complexity of deployment.
That letter was shortly followed by ASIC’s open letter dated 8 May 2026, urging all licensees and market participants to urgently strengthen their cyber resilience measures and warning that frontier AI is intensifying the global cyber risk environment and that entities must not wait for advanced AI tools to uplift their cyber security fundamentals. Together with the first civil penalty for inadequate cyber security conduct under general financial services licence obligations, these interventions signal a coordinated regulatory posture on AI and cyber governance.
The underlying message from both regulators is clear: boards and executives that fail to keep pace with technology and cyber-related risk will face serious consequences. Regulators will not wait for firms to catch up as AI continues to outpace traditional lawmaking cycles, and the gap between AI ambition and AI governance is widening.
APRA's letter came after it conducted a targeted review of a select group of large banks, insurers and superannuation trustees in late 2025. The regulator observed differing levels of maturity across governance, risk management and operational resilience, and found that assurance practices are not keeping pace with the scale, speed and complexity of AI adoption.
The regulator's findings and expectations fall into the following four areas:
| Area | APRA Expectations |
|---|---|
| Security practices | Entities should: |
| Lagging governance maturity | Entities should have governance arrangements that include: |
| Supplier risk | Entities should manage supplier risks including: |
| Change management and assurance | Entities should adopt effective assurance including: |
ASIC’s letter focuses on cyber resilience basics and its message is blunt: frontier AI models are lowering the barrier to sophisticated cyber activity, increasing the speed and scale of attacks, and enabling new forms of exploitation. This is not a distant or hypothetical risk – it is here now. ASIC expects entities to return to first principles: strong cyber resilience is not built on novel tools, but on consistent execution of well-established controls, supported by clear governance and adequate resourcing.
ASIC’s call to action is practical, with a shopping list of takeaway actions including:

- reassess cyber plans against the most critical risks in today’s threat environment;
- confirm that governance frameworks enable clear decision-making and escalation at pace;
- identify and protect critical assets;
- strengthen cyber security fundamentals by regularly reviewing and validating core controls;
- promptly patch systems;
- minimise attack surface; and
- prepare for incident response with tested playbooks.
The regulator emphasises that these are not new expectations. What has changed is the new AI-filled environment in which entities are now operating. Small weaknesses can now have serious, cascading consequences.
Neither regulator is messing around. Management and board reporting may require uplift to adequately evidence the expected levels of governance, accountability, training and testing.
These letters reinforce that boards and executives must treat technology and cyber-related governance as a core oversight obligation. Rather than creating separate AI governance structures, boards and executives should demonstrably embed AI risk management within existing governance, risk and compliance frameworks – extending and adapting current policies, risk appetite statements, escalation pathways and board-level reporting to address AI-specific exposures, and maintaining the evidence, documentation and controls needed to prove it.
The message is unambiguous: cyber and AI risk is governance risk, and governance risk is accountability risk. Organisations behind on data governance, cyber security and privacy cannot afford to layer AI on top of weak foundations – the consequences will be severe. Now is the time to pressure-test your governance frameworks against regulatory expectations before the regulators do it for you.
These letters are part of a global trend. Regulators worldwide are waking up to AI risk – and acting fast.
In the UK, a January 2026 House of Commons Treasury Committee report criticised the Bank of England, the Financial Conduct Authority (FCA) and Treasury for slow AI action and for exposing consumers and the financial system to potentially serious harm. The response was swift: within a week, the FCA launched a review into AI’s impacts on consumers, retail financial markets and regulators, and in March 2026 the Digital Regulation Cooperation Forum (comprising four UK regulators) published a foresight paper on agentic AI, confirming that existing legal frameworks apply to agentic AI and that businesses must adapt their governance now.
The recent release of Anthropic's Mythos model has intensified concerns within governments. For example, in April 2026, the UK Government issued an open letter urging business leaders to fundamentally rethink cyber risk.
In Hong Kong, a coalition of financial regulators (including the Hong Kong Monetary Authority) launched the "GenA.I. Sandbox++” initiative in March 2026 – extending coverage of an earlier pilot to multiple financial sectors with a focus on risk management, anti-fraud and customer experience.
The direction of travel is clear: AI governance is no longer optional, and regulators globally are moving from guidance to enforcement. Australian boards and executives must treat APRA and ASIC's letters as part of a coordinated global shift that demands immediate attention and action.
This publication is a joint publication from Ashurst Australia and Ashurst Risk Advisory Pty Ltd, which are part of the Ashurst Group.
The Ashurst Group comprises Ashurst LLP, Ashurst Australia and their respective affiliates (including independent local partnerships, companies or other entities) which are authorised to use the name "Ashurst" or describe themselves as being affiliated with Ashurst. Some members of the Ashurst Group are limited liability entities.
Ashurst Australia (ABN 75 304 286 095) is a general partnership constituted under the laws of the Australian Capital Territory.
Ashurst Risk Advisory Pty Ltd is a proprietary company registered in Australia and trading under ABN 74 996 309 133.
The services provided by Ashurst Risk Advisory Pty Ltd do not constitute legal services or legal advice, and are not provided by Australian legal practitioners in that capacity. The laws and regulations which govern the provision of legal services in the relevant jurisdiction do not apply to the provision of non-legal services.
For more information about the Ashurst Group, which Ashurst Group entity operates in a particular country and the services offered, please visit www.ashurst.com.
This material is current as at 11 May 2026 but does not take into account any developments to the law after that date. It is not intended to be a comprehensive review of all developments in the law and in practice, or to cover all aspects of those referred to, and does not constitute legal advice. The information provided is general in nature, and does not take into account and is not intended to apply to any specific issues or circumstances. Readers should take independent legal advice. No part of this publication may be reproduced by any process without prior written permission from Ashurst. While we use reasonable skill and care in the preparation of this material, we accept no liability for use of and reliance upon it by any person.