Thought Leadership
Navigating the legal landscape: AI in Australia
A legal deep dive into consumer, privacy, employment, misinformation / disinformation and foreign interference laws.

At a time of rapid innovation, countries around the globe are making significant investments in the research and development of artificial intelligence (AI) technologies. Against this backdrop, businesses are seeking to unlock the benefits and value of AI, while governments grapple with how best to regulate risk without stifling innovation and investment.
Ashurst’s Navigating the legal landscape: AI in Australia paper is sponsored by Google and examines how Australia’s existing legal frameworks apply, or are likely to apply, to the rapidly evolving field of AI. To illustrate this, our paper uses hypothetical case studies involving AI-enabled products and services, such as chatbots, smart home devices, recruitment tools and more.
Our paper offers valuable insights for policymakers, AI developers and businesses using AI, and focuses specifically on the areas of consumer, privacy, employment, misinformation / disinformation and foreign interference law.
Read the full paper [PDF 5.74MB]
Hypothetical case studies
Case study #1
Deployment of specific AI chatbots
Consumer law | Privacy law
A global airline launches an AI chatbot to assist with bookings, requiring users to accept standard form terms excluding liability, without warnings about possible erroneous outputs.
A customer relies on the chatbot’s incorrect advice about a business travel discount. The chatbot uses personal data for personalised pricing, resulting in higher fares than those found elsewhere.
The customer is later targeted by a phishing scam that exploited data collected by the chatbot, leading to financial loss.
Has the airline breached its obligations under consumer law and privacy law? What are the consequences? What are the customer’s rights of redress?
Case study #2
AI-enabled smart home devices
Consumer law | Privacy law
A smart home device manufacturer sells an AI-powered smart refrigerator, claiming that its AI can manage groceries, order food, learn user preferences and reduce energy bills.
The product malfunctions, over-ordering groceries and mismanaging temperature, causing food spoilage and higher bills. The company exaggerated the product’s AI capabilities.
The refrigerator secretly records video and audio, collecting sensitive data on its owner and their family members, including health risks, without disclosure.
This data is sold for targeted marketing of weight loss surgeries to young people.
Has the manufacturer breached its obligations under consumer law and privacy law? What are the consequences? What are the affected individuals’ rights of redress?
Case study #3
AI-powered recruitment tools
Privacy law | Employment law
A hair care retail chain uses an AI recruitment tool to hire a new Marketing Manager. The tool is trained on the resumes of current employees, who were mostly men, especially in managerial roles.
Out of 100 applicants, the AI shortlists 10 candidates (nine men and one woman) without human oversight. Video interviews are also assessed solely by the AI, which ranks candidates largely by age; the female candidate and a male candidate with a disability are ranked lowest.
The AI tool provides no reasoning for its choices, and the top-ranked male candidate is offered the job.
Has the business breached its obligations under privacy law, the Fair Work Act or anti-discrimination regimes? What are the consequences? What are the affected individuals’ rights of redress?
Case study #4
AI in human resources
Privacy law | Employment law
An online retailer uses AI software to monitor warehouse employees’ productivity and reliability, adjusting shift offers based on these scores.
A long-term employee, who developed a physical disability after an accident, sees her productivity score drop because she needs regular breaks, resulting in fewer shift offers despite having informed her supervisor.
Another employee, who occasionally arrives late or leaves early to care for his sick child, also sees his reliability score decrease, leading to reduced shift offers.
In both cases, the AI system does not account for individual circumstances, and supervisors have no control over the automated rostering decisions.
Has the business breached its obligations under privacy law, the Fair Work Act or anti-discrimination regimes? What are the consequences? What are the employees’ rights of redress?
Case study #5
AI deepfakes
Online safety laws and misinformation | Consumer law | Criminal law | Defamation law
A famous tennis player finds himself at the centre of a controversy when a viral video appears to show him endorsing a rival brand’s tennis racket, despite an exclusive sponsorship with another company.
The video, seemingly filmed at a press conference, is widely shared and used in advertising by the rival brand, leading to public confusion and a surge in interest.
The athlete’s management team discovers the video is a highly convincing AI-generated deepfake. Despite issuing a public denial and reaffirming his existing sponsorship, the incident damages his reputation.
The situation escalates when explicit deepfake videos featuring the athlete and branded merchandise also surface online.
How might current laws apply to this scenario? Can legitimate operators be implicated for the conduct of bad actors? What are the affected individual’s rights of redress?
Case study #6
AI research assistants
Professional ethics
A lawyer uses an AI chatbot to gather legal research for a case, receiving extracts and links to journal articles that appear to support her arguments.
Relying on these materials, she cites them in court submissions. However, when the court attempts to verify the sources, it is revealed that the articles do not exist and were fabricated by the AI.
The court questions the lawyer’s diligence and legal professionalism, places little weight on her submissions and says the conduct should be considered by the legal professional regulatory body.
The judge’s adverse comments about her conduct are published in the judgment, harming her reputation and damaging her client’s case.
What consequences might flow from the irresponsible use of AI by professionals? What should lawyers, in particular, be aware of?
Case study #7
AI used by foreign states
Foreign interference | Foreign influence
In the lead-up to a national election, a social media platform observes an unprecedented surge in new accounts, many lacking typical user traits and claiming links to charities.
These accounts, using AI-generated profile images, coordinate to spread and amplify misinformation about immigration policies, evading standard bot detection software.
The bot activity is ultimately traced to a foreign state government aiming to influence public opinion by flooding feeds with misleading, highly engaging content. Genuine discourse is overwhelmed, reducing information quality and visibility of authentic users.
Following the election, most bot accounts delete their content and deactivate.
Have Australia’s foreign interference and foreign influence laws been breached? Does the scenario raise unique issues for investigators?
Case study #8
AI chatbot use by small businesses
Consumer law
A small business owner begins using a free AI chatbot, Unreal-IQ, initially for personal tasks and later for business purposes, including drafting marketing materials and seeking business advice.
Despite a prominent disclaimer that Unreal-IQ is experimental and can make mistakes, the owner relies on its responses to tax and refund queries.
Acting on the chatbot’s advice, she claims a non-deductible business suit as a tax expense, resulting in an ATO audit and penalty, and denies a customer a refund, leading to a formal complaint to the state fair trading body and reputational harm.
The owner believed the chatbot provided reliable information about Australian law for small business.
Has the AI chatbot deployer breached consumer protection laws? Has the small business breached consumer protection laws? What are the consequences?
Interested in learning more?
Sign up to receive the latest insights and updates from Ashurst, and be sure to add 'artificial intelligence' as a topic of interest.
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.