Legal development

AI and Copyright: Australia Presses Pause on Reform


    What you need to know

    • On 19 December 2025, the Productivity Commission released its final report in the Harnessing data and digital technology inquiry. And the verdict? Not yet.
    • Training AI models is a data-hungry business, typically requiring vast troves of copyrighted works. Under Australian copyright law, this often means obtaining licences from rightsholders. The Commission acknowledges the tension: easier access to training data could turbocharge innovation and economic growth, but it might also pull the rug out from under content creators.
    • Rather than diving headfirst into legislative reform, the Commission has opted for a "wait, monitor, and review" approach. Over the next three years, it will keep a close eye on how open web licensing markets evolve, whether creative incomes hold steady, and how overseas jurisdictions navigate their own copyright exceptions. In short: no changes to the Copyright Act… for now.

    Snapshot overview

    Australian copyright law currently requires AI developers to obtain licences before reproducing copyrighted material for training. Because several overseas jurisdictions have copyright exceptions that can cover AI training, most AI model training currently occurs offshore (particularly in the United States). These exceptions include:

    • the doctrine of fair use in the United States, which permits certain unlicensed use of copyrighted works when the use is deemed "fair"; and
    • the European Union's text and data mining (TDM) exception, which allows commercial TDM unless rightsholders have "reserved their rights".

    However, the scope and limits of those exceptions remain unsettled. Fair-use determinations are case-specific and the appellate case law addressing AI training is still developing, creating near-term uncertainty. In respect of the TDM exception, there is no standardised opt-out method yet, creating operational uncertainty for both developers and rightsholders.

    Although no equivalent exception exists in Australia, direct licensing deals for "high-value" content such as books, news and music are emerging, and the Commission recommends allowing these markets to develop without interference at this stage. By contrast, licensing the open web at scale (which involves licensing hundreds of millions of websites) is commercially unfeasible in Australia today, given high transaction costs and the difficulty of identifying rightsholders. This matters because open web data is essential to training most AI models.

    Rather than proposing legislative reform, the Commission recommends a "wait, monitor, and review" approach for three years, during which it will monitor:

    • licensing markets for open web materials;
    • AI's effect on creative incomes; and
    • how overseas courts set limits to AI-related copyright exceptions.

    If material uncertainties persist after three years, the Commission foreshadows a possible independent review of Australian copyright law and AI.

    Key implications for businesses

    1. Offshore AI model training remains most attractive… for now

    Businesses planning to train models in or from Australia face licensing requirements, while offshore training may offer more permissive pathways. The three-year monitoring period creates a relatively stable window for strategic planning, advocacy, and risk management.

    However, businesses that train their AI models overseas will be subject to the laws of those jurisdictions. The scope of exceptions, including the US doctrine of fair use and the EU TDM exception, remains unsettled, which raises litigation and regulatory uncertainty for models trained outside Australia.

    If US courts narrow fair use or the EU tightens the TDM exception, Australia’s current licensing-based approach could become comparatively less burdensome and more attractive for training.

    2. Future transparency obligations could raise compliance costs and disclosure risk

    Rightsholders are pushing for greater training dataset transparency to detect unlicensed use of their works and support enforcement. On the other hand, developers claim that a model’s training dataset is a core part of what sets it apart from competitors’ models, and that disclosure could erode trade secrets and lead to reduced incentives to innovate.

    Australia currently imposes no such transparency obligations. In Europe, the EU AI Act requires developers of general-purpose AI models to publish a "sufficiently detailed summary" of their training data, though rightsholders argue this obligation is inadequate.

    If Australia or other key markets adopt disclosure obligations, developers would need to implement governance measures such as data provenance tracking and audit trails, increasing compliance costs and creating a risk that trade secrets are disclosed. Conversely, rightsholders would gain greater leverage to detect unlicensed uses and negotiate compensation.

    Authors: Anita Cade, Partner, and Elise Jensen, Lawyer.

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.