Business Insight

Privacy risks for AI and ADM in an evolving regulatory ecosystem 


    The increasing prevalence of AI and automated decision-making has driven the need for clear governance, transparency and security of customer data

    What you need to know

    • Privacy Awareness Week 2024 ‘Power up your privacy’ focusses on privacy and technology, with key principles of transparency, accountability and security. How do these core principles apply to the use of new technologies?
    • In 2024, Artificial Intelligence (AI) and Automated Decision-Making (ADM) are in the privacy spotlight. AI and ADM applications are already attracting significant public and regulatory scrutiny, and the comprehensive reforms expected in August will only sharpen this focus.
    • Supporting the deployment of AI and ADM in critical operations requires a risk management framework. This framework should include a systematic approach to process identification, risk assessment and the management of controls. Ad hoc approaches to risk management are not enough.
    • The scale of proposed reforms means that 'gold plated' solutions will not be practical – risk-based approaches to ongoing compliance will be required.
    • This article explores the current regulatory requirements related to AI and ADM, as well as some of the practical tools that can help you understand, assess and ultimately unlock safer automation. 

    What you need to do

    To support innovation and prevent disruption to critical business processes that leverage AI and ADM, you need risk-based identification and assessment of those processes, together with a controls management framework that drives transparency, accountability and security.

    This involves asking the right questions:

    • Can you confidently identify the automated processes within your organisation?
    • Can you explain how they handle personal information?
    • Have these processes been risk rated?
    • Have these processes been assessed for privacy compliance?
    • Have adequate controls been implemented to ensure alignment to regulatory obligations?

    Deploy the right tools in your risk management framework to drive transparency, accountability and security:

    • Transparency: Ensure privacy policies, collection notices and consent management practices are implemented in a way that reflects what is actually being done with personal information inside your organisation;
    • Accountability: Risk-informed privacy governance, visibility of AI and ADM processes, the right training delivered to the right people at the right time, documented behavioural expectations, monitoring of controls implementation; and
    • Security: AI threat models that evolve to reflect reality, and ensuring that supply chain risk management is not an external or isolated element but a central, well-integrated part of your organisation.

    AI and ADM are in the privacy spotlight

    Growing regulatory focus on AI and ADM has been a hallmark of the past year, with further regulation of AI and ADM expected to be introduced. 

    In relation to ADM, the Royal Commission into the Robodebt Scheme has been a wake-up call, prompting government agencies to carefully consider their decision-making processes, and reminding organisations of the significant consequences of using ADM in critical services.  

    Just last week, the Attorney-General announced that changes to the Privacy Act are expected to be introduced in August 2024. A distinct focus on driving safety in use of these technologies can also be seen in the Australian Government’s interim response to the 'Safe and Responsible AI' consultation – proposed responses include:

    1. better regulatory guidance;
    2. stronger legal protections; and
    3. implementing mandatory guardrails for 'high risk' artificial intelligence.

    Australia's privacy laws do not currently include specific rules for AI and ADM. This does not mean that no rules apply. The use of these technologies continues to be regulated by the general obligations that apply to all methods of data handling. Existing requirements under the Privacy Act require organisations to handle personal information with transparency, take accountability for their actions and ensure security of personal information – the use of AI and ADM needs to be viewed through this lens.  

    However, in addition to existing requirements, the expected Privacy Act reforms will include the following requirements for substantially automated decisions that have a 'legally or similarly significant impact':

    • Transparency: making it clear what personal information is used in ADM - which includes collected, inferred or generated information.
    • Explanation: giving individuals meaningful information about automated decision-making processes.

    These requirements and the legally or similarly significant effect test are similar to existing requirements under the GDPR. 

    In addition, the government's proposed 'fair and reasonable' test will mean organisations must demonstrate how they have weighed the impacts on individuals, and the public interest in protecting privacy, against the organisation's interest in carrying out specific activities involving personal information – including the deployment of AI and ADM that uses personal information. 

    How can you drive responsible automation and innovation? 

    Picking up on this year's Privacy Awareness Week themes of transparency, accountability and security, we share some key tools to help you improve these elements in your AI and ADM deployments. 

    AI and ADM are more widespread within your organisation than you think

    Leaders responsible for privacy risk within organisations often find themselves unaware of where automation has been implemented, and lack a structured process to identify those deployments and risk assess them against regulatory obligations.

    ADM and AI are commonly deployed in areas including:

    • Critical business operations such as customer eligibility assessments for products/services, demand forecasting, supply chain management, and product recommendation engines;
    • Customer behaviour analysis such as customer segmentation, profiling and targeting; and
    • Human resource functions including recruitment and employee performance analysis.

    Key Questions

    • Can you confidently identify the automated processes within your organisation?
    • Can you explain how they handle personal information?
    • Have these processes been risk rated?
    • Have these processes been assessed for privacy compliance?
    • Have adequate privacy controls been implemented to prevent misuse?

    These questions should leave you with a 'to do' list – such as implementing processes to review AI or ADM deployments that have not been risk rated or assessed.

    Improving Transparency

    The Privacy Act mandates transparency from organisations in their data handling practices and requires privacy policies and collection notices to be clear and accurate. Upcoming reforms are set to reinforce this by requiring that notices are also concise and understandable.

    Collection Notices

    Current privacy regulations require individuals to be provided with clear, targeted collection notices that explain how their data will be used as a result of specific interactions or transactions involving personal information. Without appropriate education and governance, project teams can mistakenly assume that general organisational privacy policies suffice. A robust privacy risk management framework ensures that these notices are designed effectively to enhance trust, minimise regulatory risks and are regularly updated to reflect evolving data handling practices.

    Consent Management

    As AI and ADM are integrated into critical business processes, organisations often explore innovative uses of data, which can lead to 'purpose creep'. When data cannot be de-identified, many organisations rely on consent for these secondary uses. To avoid compliance issues, it is crucial that consent management is not siloed: teams responsible for AI/ADM development must share a consistent understanding of the consent status of data subjects with other business units. This is best achieved through a single, reliable source of truth for consent, integrated with business rules that de-identify or destroy data where consent is withdrawn. 
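    The 'single source of truth' pattern above can be sketched in code. The following is a minimal illustration only – the class and field names are our own assumptions, not drawn from any particular consent platform. A central register keeps the latest consent decision per data subject and purpose, and a simple business rule excludes records for subjects who have withdrawn consent:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str            # e.g. "model_training" or "profiling"
    granted: bool
    updated_at: datetime


class ConsentRegister:
    """A single, shared source of truth for consent across business units."""

    def __init__(self):
        # The latest decision per (subject, purpose) wins; a withdrawal
        # overwrites any earlier grant.
        self._records = {}

    def record(self, subject_id: str, purpose: str, granted: bool) -> None:
        self._records[(subject_id, purpose)] = ConsentRecord(
            subject_id, purpose, granted, datetime.now(timezone.utc))

    def has_consent(self, subject_id: str, purpose: str) -> bool:
        rec = self._records.get((subject_id, purpose))
        return rec is not None and rec.granted


def enforce_withdrawals(register: ConsentRegister, rows: list, purpose: str) -> list:
    """Business rule: keep only rows whose subject currently consents to this
    purpose. In practice the excluded rows would be de-identified or
    destroyed, not merely filtered out of one result set."""
    return [r for r in rows if register.has_consent(r["subject_id"], purpose)]
```

    In a real deployment the register would be a governed data store consulted by every AI/ADM pipeline before any secondary use, rather than an in-memory dictionary.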

    Improving Accountability

    Improving organisational accountability to ensure the risks posed by AI and ADM are manageable requires robust privacy governance structures.

    Organisations must take reasonable steps to implement practices, procedures and systems that ensure compliance with privacy obligations. This distinct requirement means that organisations without adequate privacy risk management processes in place can be in breach of their privacy obligations even if no privacy incident or data breach has occurred.

    Robust Risk Management

    Appointing a privacy risk owner in your organisation is essential to manage the privacy risks associated with AI/ADM deployments in critical business operations. It is also crucial that privacy risk management is integrated into broader risk and internal audit programs. Additionally, establishing a Privacy Management Committee enhances interdisciplinary collaboration in large organisations, facilitating a cohesive approach to privacy risk management.

    Visibility of High Risk AI and ADM Deployments

    Accountability for AI and ADM processes requires that each process be mapped, documented, risk rated, and understood. This requires risk assessments to be conducted which consider a wide set of regulatory obligations, including privacy law, competition law, intellectual property law, sector-specific regulation, contractual obligations, internal standards and stakeholder expectations. These assessments must be updated as the processes, data, or outputs change. 

    To discover higher-risk uses of AI and ADM for assessment, focus first on:

    • higher-risk or higher-value business-critical processes, to understand how they have been or will be automated; and
    • access to (and requests to access) sensitive, higher-value or higher-risk data sets (including those containing personal information).

    Without visibility into where and how high risk AI and ADM processes are deployed, organisations will struggle to meet privacy and other regulatory obligations.

    The Right Training

    Implementing digital training materials tailored for teams developing, using, or relying on the results of AI and ADM systems that handle personal information is crucial. This training should clearly explain how privacy obligations specifically relate to their work with AI and ADM systems, making these concepts easily applicable in their daily activities.

    Consider risk-focussed training or 'top up' reminders as part of project kick-offs, role induction or project milestones, and embed ongoing staff enablement in your post-deployment maintenance and compliance strategy. 

    Internal Policies, Standards and Procedures 

    Internal policies and standards lay the groundwork for fostering a culture of privacy and accountability but often lack detailed guidance for staff who deal with AI/ADM systems daily. Implementing privacy-specific Standard Operating Procedures (SOPs) that offer detailed instructions for applying privacy principles to high-risk activities is crucial for bridging this gap.

    Improving Security

    Privacy laws already require organisations to take reasonable steps to protect personal information, but upcoming reforms will clarify that this includes both technical and organisational measures.

    The 'reasonable steps' an organisation must take to protect personal information will depend on the circumstances, and will evolve as personal information handling practices and the cyber-attack and defence environment evolve. What is considered 'reasonable' will also be informed by the proliferation of guidance, advisories, alerts and standards – key parts of Australia’s 2023-30 Cyber Security Strategy.

    Evolving Threat Models

    Organisations need to understand and properly manage the datasets that they expose to their (and their third-party providers') AI models. The benefits provided by AI and ADM deployments, such as frictionless automation, can also allow bad actors to rapidly, repeatedly and at volume exploit process or logic errors, vulnerabilities or unexpected system behaviours.

    While it is not possible to prevent all threats, organisations can develop and evolve threat models that describe and respond to more likely and more harmful threats, informed by current trends and an understanding of the vulnerabilities and attack vectors more relevant to AI and ADM systems.

    Managing Supply Chain Risk

    AI and ADM deployments often depend on complex digital supply chains, including AI suppliers, apps, resellers, data providers, model training services, data centres, and secure data transmission services. Regulators and customers have been clear — the reputational and regulatory risks associated with third-party providers cannot be outsourced. It is crucial to assess and manage these risks from the beginning.

    Effective due diligence should be conducted early in the tendering process to evaluate the privacy and cybersecurity maturity of service providers, and their ability to handle risks and meet regulatory requirements. It will be important for customers and suppliers to understand the specific Australian context, because fragmented approaches to regulation across various jurisdictions will mean it will not be sufficient for suppliers to simply attest to compliance in another jurisdiction. 

    Organisations must integrate their critical suppliers into their operational resilience, risk management, privacy management, and incident response plans, ensuring the clear delineation of responsibilities.

    What Can I Do Today?

    To learn more about what you can do today to drive transparency, accountability and security in AI and ADM, meet regulatory expectations, and prepare for the impending Privacy Act reforms, please reach out to the key contacts below.

    Want to know more?

    Authors: Geoff McGrath (Partner), Chris Baker (Partner, Risk Advisory), John Macpherson (Partner, Risk Advisory), John Moore (Director, Risk Advisory), Leon Franklin (Director, Risk Advisory), Andrew Hilton (Expertise Counsel), Michael Turner (Executive, Ashurst Risk Advisory) and Patil Sevagian (Specialist, Ashurst Risk Advisory). 

    This publication is a joint publication from Ashurst Australia and Ashurst Risk Advisory Pty Ltd, which are part of the Ashurst Group.

    The Ashurst Group comprises Ashurst LLP, Ashurst Australia and their respective affiliates (including independent local partnerships, companies or other entities) which are authorised to use the name "Ashurst" or describe themselves as being affiliated with Ashurst. Some members of the Ashurst Group are limited liability entities.

    Ashurst Australia (ABN 75 304 286 095) is a general partnership constituted under the laws of the Australian Capital Territory.

    Ashurst Risk Advisory Pty Ltd is a proprietary company registered in Australia and trading under ABN 74 996 309 133.

    The services provided by Ashurst Risk Advisory Pty Ltd do not constitute legal services or legal advice, and are not provided by Australian legal practitioners in that capacity. The laws and regulations which govern the provision of legal services in the relevant jurisdiction do not apply to the provision of non-legal services.

    For more information about the Ashurst Group, which Ashurst Group entity operates in a particular country and the services offered, please visit

    This material is current as at 7 May 2024 but does not take into account any developments to the law after that date. It is not intended to be a comprehensive review of all developments in the law and in practice, or to cover all aspects of those referred to, and does not constitute legal advice. The information provided is general in nature, and does not take into account and is not intended to apply to any specific issues or circumstances. Readers should take independent legal advice. No part of this publication may be reproduced by any process without prior written permission from Ashurst. While we use reasonable skill and care in the preparation of this material, we accept no liability for use of and reliance upon it by any person.

