Podcasts

World@Work: AI in the Workplace: some practical challenges

22 October 2025

In this episode, we discuss several nations’ contrasting approaches to the regulation of AI. In particular, we shine a light on practical challenges for employers, including how they use AI in recruitment and how AI is used to monitor employee performance.

Ashurst’s Andreas Mauroschat hosts the podcast from Germany. He’s joined by his colleagues Ruth Buchanan in the UK, Trent Sebbens in Australia, and Clarence Ding in Singapore. Together, they discuss how their respective jurisdictions are seeking to strike a balance between fostering innovation, maximising productivity, and considering their workforces.

To listen to this and subscribe to future episodes, search for “Ashurst Legal Outlook” on Apple Podcasts, Spotify or your favourite podcast player. To find out more about the full range of Ashurst podcasts, visit ashurst.com/podcasts.

The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to. Listeners should take legal advice before applying it to specific issues or transactions.

Transcript

Andreas

Hello and welcome to Ashurst Legal Outlook and the latest episode in our World @ Work series. My name is Andreas Mauroschat and I'm a partner in Ashurst’s employment team in Frankfurt.

In this episode, we'll be diving into a topic that every HR professional, in house lawyer and business leader is talking about – the rapid rise of artificial intelligence in the workplace. As you know, this area is changing fast, with lots of moving parts.

So over the next half hour, we'll explore three practical challenges:

  • The key issues for employers when using AI in recruitment and employee performance monitoring
  • How AI connects with ever changing data protection rules
  • And finally, and most importantly, the specific steps which organisations should take now to prepare for and comply with new AI regulations.

To help us unpack these topics, I'm joined today by three of my partners from around the world. Ruth Buchanan from our London office. Hello Ruth!

Ruth
Hi Andreas. It's lovely to be here.

Andreas 
Trent Sebbens from Sydney. Hi Trent!

Trent
Hi Andreas, great to be with you.

Andreas
And a special hello to Clarence Ding, who recently joined our Singapore practice. Welcome Clarence!

Clarence
Hi Andreas. It's a pleasure to be here.

Andreas

Together we'll look at what's happening across Europe, the UK, Australia and Asia Pacific, and we'll share some practical tips for employers wherever you're listening from.

Let's start here in Germany and, more broadly, in the EU, where the first rules of the AI Act started to apply from February 2025. Now the Act uses a risk-based approach, and it classifies most HR-related AI tools – think of algorithmic CV screening, automated shortlisting, psychometric testing or productivity tracking software – as high-risk applications.

Now for employers, that means five main obligations.

1. Obligation number one: risk assessments: before using an AI system you need to carry out, and keep up to date, a fundamental rights impact assessment. This assessment covers issues like bias, discrimination and data privacy risks. The importance of this obligation is highlighted by the case of a major German retailer that was fined in 2023 after its AI-powered recruitment tool was found to systematically reject female candidates for warehouse roles due to a historical data bias. The root cause: the company had failed to conduct a proper risk assessment.

2. Obligation number two: transparency and documentation: providers of AI systems must keep detailed technical records, but the employers using these systems also need to keep logs showing how the system is actually used in practice. Why that is important is shown in a case from the Netherlands: a municipality was required to disclose the algorithmic logic behind its welfare fraud detection tool after a court had ruled that black box systems cannot be used to make decisions affecting citizens' rights without transparency.

3. Obligation number three: human oversight: high-risk AI can’t just run on autopilot. There must be meaningful and documented human review of each important employment decision. An example to reinforce this need for human oversight: a French bank implemented an AI system to monitor employee productivity. After an employee challenged a disciplinary action, the bank was required to submit specific evidence of how human review was secured in the decision-making process.

4. Obligation number four: transparency to individuals: all candidates and employees must be clearly told when an AI system is involved, and they must receive information on the logic, significance and the likely consequences of the automated processing. Now to put this in practical context: in Italy, a food delivery platform was ordered to expressly notify its riders whenever their performance was being evaluated by an algorithm following protests and legal actions by trade unions.

5. Finally, obligation number five: data quality and non-discrimination: data sets used in an AI algorithm must be relevant, representative, free of errors and complete. That is a “big ask” since old HR data may often reflect old biases. Practical example: a Spanish tech company had to retrain its AI recruitment tool after discovering that it consistently favoured graduates from certain universities due to incomplete training data.

If you don't comply with these obligations, you may face hefty fines of up to 35 million euros, or 7% of annual turnover. Add to that the existing sanction rules under the GDPR and Germany's Federal Data Protection Act, and you can see that using AI in HR is already a bit of a legal minefield.
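To get a feel for how that penalty ceiling scales with company size, here is a minimal illustrative sketch. It assumes the "whichever is higher" rule that the AI Act applies to its most serious infringements; the exact tier depends on the type of infringement, so treat this as a sketch, not legal advice:

```python
def ai_act_fine_ceiling(annual_turnover_eur: float) -> float:
    """Illustrative ceiling for the most serious AI Act infringements:
    up to EUR 35 million or 7% of annual turnover, whichever is higher.
    Lower tiers apply to other infringements; this is a simplification."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# For a company with EUR 1 billion turnover, 7% of turnover exceeds EUR 35m:
print(ai_act_fine_ceiling(1_000_000_000))  # 70000000.0
# For a smaller company, the flat EUR 35m ceiling dominates:
print(ai_act_fine_ceiling(100_000_000))    # 35000000
```

The practical point of the "whichever is higher" mechanic is that large employers cannot treat the flat figure as the upper bound – the turnover-based figure grows with the business.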

Now let's turn to the UK. Ruth, we're seeing white papers on pro-innovation regulation of AI. We're seeing a consultation on automated decision-making under the UK GDPR. And the Equality and Human Rights Commission is warning about algorithmic bias. So from a practical standpoint, what do you see as the key compliance questions for UK employers using AI in recruiting or performance management?

Ruth 
Thanks Andreas. I think starting with recruitment, the key employment compliance factor for UK employers is to take all possible actions to ensure that bias and unconscious discrimination are reduced with any AI tools that are being used. For example, organisations should carefully review the data which is being used to train their AI recruitment tools to ensure that the data is current and that any existing biases are eliminated. Human oversight is also critical. It's important to ensure that someone in the organisation is trained to carry out the necessary assessment to reverse engineer any outcomes produced in order to identify and correct any bias that does creep in.

Also, I think where third-party providers are engaged to provide an employer's recruitment tools, it’s really crucial for employers to take the time to understand the provider's contractual duties in relation to the AI tools. It's important to ensure that the tools are appropriately trained on non-biased data and, in contract negotiations, to ask for assurances about the safeguards that have been implemented to reduce that bias and possibly discrimination.

Andreas
The AI Act even goes a bit further by requiring employers to conduct the fundamental rights impact assessment before they deploy high-risk AI applications in HR, so employers must proactively document and address risks of bias, discrimination and data privacy before an AI tool is used. So why are all these checks and balances so important for UK employers, Ruth?

Ruth 
Well, if AI produces a discriminatory outcome, either in recruitment or indeed any performance management process, the individual could bring a claim for discrimination. And to bring this type of claim in the UK, the individual needs no qualifying period of service, and if the claim is successful, there's potentially uncapped financial liability. On top of that, there might also be a separate injury to feelings award.

Andreas 
So it appears that the stakes are pretty high for employers in the UK. Apart from discrimination, are there any further compliance considerations when managing performance?

Ruth 
I think employers just always have to be mindful of their duty to make employment decisions in a lawful, rational way and in good faith, and that's part of their common law duty here of trust and confidence. And in practice, that means that where AI tools are making or assisting in performance management decisions, perhaps by reviewing absence data, employers have to be prepared to be transparent and fully explain the way in which the AI has impacted any determinations made so as to satisfy an employee's rights to lawful and rational decisions being made in good faith.

I mean, ideally, AI should never be making any performance-related decisions. It can assist and provide information and data to help the appropriate person make the final informed decision.

Andreas 
Thanks very much, Ruth. Now let's cross over to AsiaPac. Clarence, Singapore has been positioning itself as an AI hub, and notably, it released the AI Verify Framework, as well as an updated Personal Data Protection Act (PDPA) guidance on automated decision-making. What practical implications are you seeing for employers? In particular, how strict is the consent requirement under the PDPA when employers use AI for employee monitoring? And is there a trend towards mandatory external audits of AI tools similar to the EU's approach?

Clarence 
You're absolutely right that Singapore is positioning itself as an AI Hub. Just a few days ago, on the sixth of October, Singapore's Digital Development and Information Minister gave an interview. He said that AI will likely be key to strengthening Singapore's competitive advantage as a leading financial hub. The government's priorities are therefore to encourage the adoption of AI for work processes, as well as improving guardrails against risks associated with the technology.

Andreas 
That's interesting Clarence, and presumably this will lead to the regulatory approach in Singapore being less onerous on employers than in the EU?

Clarence 
Yes, that's correct, Andreas. Singapore has always taken a very “light touch” approach to regulation. This is the case with financial regulation and AI regulation looks to be no different. Singapore released a model AI governance framework just last year. This governance framework essentially attempts to foster a trusted AI ecosystem through self-regulation. There are several key considerations with this:

1. Firstly, accountability: it encourages all stakeholders, that includes policy makers, the industry itself, the research community, as well as the broader public, to be responsible in how AI is developed and deployed.

2. Secondly, building trust through third-party testing and assurance, akin to what is already being done in more developed industries such as finance and healthcare.

3. Third, establishing structures for incident reporting and remediation.

4. And fourth, democratising AI access by ensuring AI systems are developed sustainably for good and workers are upskilled to harness the technology.

Now, given the voluntary nature of the adoption of this governance framework, as well as Singapore's light touch regime, the implication for employers has not been very significant. In most cases, the impact has been greatest for multinational corporations, particularly those who also fall within the jurisdiction of the EU AI Act, and who are therefore bound to apply those far more stringent standards across the board, for consistency and compliance.

Andreas 
In fact, we observe that many multinational employers operating in the EU have chosen to apply the EU AI Act's high-risk requirements globally, including in AsiaPac to streamline their compliance standards and to avoid regulatory gaps. So even in a jurisdiction with light regulation, the EU standards may increasingly shape global HR practices. So Clarence, what's happening in relation to data privacy?

Clarence 
That's a great question, given the close nexus between AI regulation and data privacy. It shouldn't come as a surprise to note that the Singapore government has also released advisory guidelines on the use of personal data in AI systems. Although some had anticipated that the Data Privacy Commissioner might take a stricter approach to the consent requirement for the collection, use and disclosure of personal data for AI purposes – this has not actually been the case. The advisory guidelines have, in fact, simply reaffirmed the general position as regards an organisation's right to collect, use and disclose personal data for AI purposes so long as the data subjects have been notified of the intended purpose and the consent is obtained.

Andreas 
So is consent always obtained?

Clarence 
Actually, not really. In some cases, you don't even need consent – particularly if the personal data is being used for certain purposes, such as research (R&D into a new AI system, for example) or for business improvement purposes, such as testing the AI system for bias. We don't actually foresee Singapore following the EU's trend towards mandatory audits of AI tools anytime soon, either.

Andreas 
Thanks very much. Clarence, now over to Australia. Trent, the federal government's Privacy Act review squarely addresses automated decision-making, and SafeWork regulators are starting to look at psychosocial risks linked to algorithmic surveillance. Could you walk us through how AI is being addressed in Australia?

Trent
Yes, thanks, Andreas. In Australia, there's no standalone legislation that specifically regulates the use of AI or automation. And to date, Australia has responded to potential AI risks through voluntary measures such as the AI ethics principles that were published back in 2019 as well as the voluntary AI safety standard, published a bit more recently in August of last year.

More recently though, Australian privacy laws have been the subject of an extensive review, and a recent tranche of reforms has introduced new transparency obligations regarding automated decision-making (or ADM), including a requirement for entities to include in their privacy policies information about the kinds of personal information used and the types of decisions made in automated decision-making. Those new ADM rules capture AI that uses personal information, so those transparency rules can be seen as one of the first cabs off the rank for an ongoing focus on AI regulation.

There is also a second tranche of amendments proposed to our Privacy Act here in Australia, and that's currently being consulted on by the government. It's likely to cover a broader spectrum of issues, including a range of measures to reform and enhance protections in relation to the use of AI, including introducing privacy impact assessments for high-risk activities such as ADM, and also the need to explain automated decisions. Also perhaps a requirement that data activities be fair and reasonable regardless of whether or not consent has been obtained, and also changes to protect more data within the definition of personal information. So that's likely really to impact the training and use of AI models.

Andreas
And what has been the view on employment law, specifically Trent?

Trent
Existing employment, labour relations and safety laws will already apply to the adoption of AI in the workplace, because those laws are really agnostic as to the way in which decisions are made. So, as an example, they'll apply to decision-making which is aided by AI. However, it has been exercising the minds of employers, trade unions, government agencies such as the Australian Human Rights Commission, and governments across Australia, whether there should be additional laws or any changes to those laws to meet the challenges posed by the adoption of AI.

Andreas
And where has the government landed, Trent?

Trent
So at the federal level, the Australian government's taking a pretty cautious approach to AI in the workplace. They appear to be interested in striking a balance between protecting workers' rights and allowing businesses to innovate and grow.

They held, pretty recently (in August), an economic reform round table at which AI and its role in the workplace were also discussed. So for now, I would say the government is focused on how AI can be used to boost productivity and growth in Australia, although it will no doubt be looking at the potential impacts of AI's adoption on the workforce across our economy.

The peak labour council here in Australia, the Australian Council of Trade Unions (ACTU), has been pushing for strong regulations, including what it's styling as mandatory agreements called “AI implementation agreements”. These would apply between employers and employees and cover things like consultation with workers before any AI is introduced, guaranteeing job security and retraining for workers, and preventing AI from being used to unfairly monitor or pressure workers. The ACTU is also calling for a national authority to oversee those laws: they want the agreements to be legally enforceable, with oversight provided by a new agency.

Andreas
What is the approach of other interested parties?

Trent
From an employer perspective, employer groups have been pretty cautious. A bit aligned with the government, they worry that too much regulation could stifle innovation and harm Australian businesses and their global competitiveness. They've emphasised the need for flexibility and innovation, and have suggested primarily there'd be a focus on education and training to help workers transition into new roles as AI changes the job market, and that view has really been supported by others, including the Tech Council of Australia.

The government has tasked our independent Productivity Commission to undertake a review of AI, and it has analysed that issue in a recent report on artificial intelligence, emphasising the need for regulation to ensure fair outcomes for workers and workplaces, with the Commissioner who led that report saying that regulation is crucial to balance innovation with protecting workers’ rights.

So I think it's fair to say, Andreas, we're still at a pretty early stage of regulatory reform and the review of employment laws in Australia. Although we have seen, in recent times, the Australian government being prepared to give our federal employment tribunal, the Fair Work Commission, new powers to deal with changed circumstances – for example, powers to deal with disputes arising from the energy transition, as well as changes to the economic landscape, such as new rights and term-setting regimes for the gig economy.

Andreas
Trent, can you also tell us a bit about the employer-specific duty of care-style obligation?

Trent
Yeah, in the safety space, what we're seeing in one of our major states, New South Wales, is the parliament considering some reforms to work health and safety laws. The changes currently being considered would impose a positive duty on those operating digital work platforms – that includes work being organised by an algorithm, by artificial intelligence, or by other automation through a platform-type system or through software.

Now, the proposed duty would require that the allocation of work by the digital work system be without risk to the health and safety of any person, with specific risks being called out: excessive or unreasonable workloads; the use of excessive or unreasonable metrics, monitoring or surveillance to track the performance of workers at work; and, lastly, the introduction of discriminatory practices or decision-making.

Andreas
Thank you, Trent, that's really interesting.

I've really enjoyed hearing how different jurisdictions are dealing with the use of AI in the workplace. As we're almost out of time, let me sum up what we learned today in four key takeaways for employers.

1. Number one: map your AI use cases now: you can't comply if you don't know where AI is being used. Create an inventory, classify risk levels and identify any high risk HR applications.

2. Takeaway number two: build a cross functional governance framework: legal, HR, IT, data protection, and health and safety teams all need to work together to document who is responsible for what and how decisions in relation to AI are made.

3. Takeaway number three: prioritise transparency and fairness: whether under the EU AI Act, the UK's proposed code, Singapore's PDPA or Australia's privacy reforms, the common theme is explainability, human review and proactive bias checks.

4. Takeaway number four: inform and educate your employees: make sure that employees are regularly told about and trained on the right way to use AI at work, including the data protection rules and the risks of using private or unauthorised AI tools. Clear guidance and ongoing education are absolutely essential to avoid accidental data breaches or misuse of systems.

If you can show you've got those four basics in place, you'll be in a much stronger position when regulators or your employees start asking tough questions.
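For readers who want a concrete starting point on takeaway number one, the AI inventory can begin as a simple register of use cases with a risk label and a couple of compliance flags. This is a minimal illustrative sketch – the tool names, risk labels and fields below are assumptions for demonstration, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str                      # internal tool name (illustrative)
    hr_function: str               # e.g. recruitment, performance monitoring
    risk_level: str                # most HR applications are "high" under the EU AI Act
    human_oversight: bool          # is documented human review in place?
    impact_assessment_done: bool   # has a fundamental rights impact assessment been done?

def high_risk_gaps(register: list[AIUseCase]) -> list[str]:
    """Flag high-risk entries missing human oversight or an impact assessment."""
    return [u.name for u in register
            if u.risk_level == "high"
            and not (u.human_oversight and u.impact_assessment_done)]

# Hypothetical register entries:
register = [
    AIUseCase("cv-screener", "recruitment", "high", True, False),
    AIUseCase("shift-planner", "scheduling", "limited", False, False),
]
print(high_risk_gaps(register))  # ['cv-screener']
```

Even a spreadsheet version of this structure answers the regulator's first questions: where is AI used, how risky is it, and what controls sit around it.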

Unfortunately, we're out of time now on a subject that we could easily talk about for hours. Many thanks to Ruth, Clarence and Trent for sharing their insights and to you – our audience – for listening. We hope you found this episode informative and enjoyable.

If you liked this episode, you can hear more Ashurst podcasts (including more from our World @ Work series) by visiting ashurst.com/podcasts. If you don't want to miss future episodes, you can subscribe to Ashurst Legal Outlook on Apple Podcasts, Spotify, or your favourite podcast platform. While you're there, please feel free to leave us a rating or review. But for now, I'm Andreas Mauroschat, and on behalf of my co-panellists, thanks again for listening and goodbye.

