Legal Outlook podcast 2: transcript

Transcript



Mark Bradley:
Hello, and welcome back to Ashurst Legal Outlook. This episode is the second in a series dedicated to artificial intelligence. In this series, we'll bring you global perspectives along with local Australian regulatory examples as we explore and unlock the mysteries of AI and its legal and business implications. Today, we're speaking with Tae Royle, the head of digital products APAC [inaudible 00:00:25] advanced digital. We'll discuss the ethics behind AI decision [inaudible 00:00:30] AI-informed decision-making is being used more and more in everyday life, including in the delivery of government services, justice and policing, entertainment, employment and banking, and that's raising ethical, moral, and legal questions about how we protect human rights and the individuals subject to AI-powered decisions. Join me now as we explore these questions and more on Ashurst Legal Outlook. Welcome back, Tae. In the last session, we covered an introduction to AI and its legal context. What I want to do today is really drill down into the concept of AI-powered decision-making. My first question is: what is it?

Tae Royle:
When we talk about AI decision-making, we're talking about a decision-making process that relies in part or in whole on artificial intelligence or machine learning. I'll tend to use the terms AI and machine learning interchangeably. At its heart, what we've got is a mathematical model or an algorithm, and the computer learns to recognize recurrent patterns in those models. What you do is give the computer a piece of information, which it ingests, and the computer attempts to make a prediction based on that piece of information, having regard to its models. And as Yogi Berra said, it's actually really hard to make predictions, particularly about the future. That's one of the hard things about using AI for decision-making: you're really trying to decide what happens in the future based on limited information.
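To make that concrete, here is a minimal sketch, in Python with scikit-learn, of the pattern Tae describes: a model is fitted to historical examples and then asked to make a prediction, with an estimated probability, about a new case. The feature names, data, and outcomes are invented purely for illustration and are not drawn from any real system.

# Minimal illustration of AI-assisted decision-making (invented data).
# The model learns recurrent patterns from past examples, then attempts a
# prediction for a new case it has never seen.
from sklearn.linear_model import LogisticRegression

# Invented history: [feature_1, feature_2] -> observed outcome (0 or 1)
X_train = [[1, 0], [2, 1], [5, 0], [7, 3], [3, 2], [8, 0]]
y_train = [0, 1, 0, 1, 1, 0]

model = LogisticRegression().fit(X_train, y_train)

# A new piece of information: the model returns a prediction and a
# probability, not a certainty -- which is the hard part of predicting
# the future from limited information.
new_case = [[4, 1]]
print(model.predict(new_case))        # predicted outcome
print(model.predict_proba(new_case))  # estimated probabilities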

Mark Bradley:
So just from that description, you can see the potential power of AI tools in legal practice and in commercial life generally. And we see them increasingly in the work we do, for example in document review for discovery, in contract analysis, and in a whole range of other areas. You can also see, I think, that that capacity for learning and prediction also poses incredible risks if the algorithm or the AI gets it wrong. What's been your experience there? And what would you say are the key risks?

Tae Royle:
I think there are ways to manage it if AI were to make mistakes; there are ways we can potentially mitigate that. It's more about ensuring that we have those appropriate procedures in place. Really, I think the most egregious areas are where you have AI that has a real and measurable impact on people, and the people responsible for running that AI accord it an unwarranted deference, or don't appropriately supervise it. I saw an example in the press a few weeks back, a quote from a senior Victorian police officer, that I want to share with you.

And the individual says: we can run the tool now, and it will tell us, like, the kid might be 15 years old, it tells us how many crimes he's going to commit before he's 21 based on that, and it's 95% accuracy. It's been tested, the senior officer said. So this is an AI tool that's intended to help manage what are called core youth offenders in Dandenong, which is a culturally diverse suburb in Victoria. As a technologist, I find this quote deeply troubling on a number of levels, but really my core concern, what it boils down to, is that there seems to be a fundamental misunderstanding by the senior officer of the limitations of the mathematical models that underlie the AI.

You need to understand that when you're making predictions, the further into the future you make those predictions, the less accurate they become. Five years out is a very long time when you're dealing with our youth. And not only that, the mere act of making a prediction can itself affect the future. So I think we need to be incredibly careful about how we're making predictions about the future behaviors of individuals, particularly when we're suggesting to those individuals that they will in some way become hardened criminals as they grow up. I think there's a real concern that some people who are working with artificial intelligence perhaps pay a bit too much attention to the marketing materials that come with those tool sets, and as a result they accord too much deference to those tools. It's really important that we take these predictions in context and apply our own level of human judgment to them.
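Tae's point about forecast horizons can be shown with simple arithmetic. Assuming, purely for illustration, that a model is 95% accurate over a one-year horizon and that errors compound independently from year to year, the effective accuracy over a five-year horizon is roughly 0.95 to the power of 5, or about 77%, well below the headline figure quoted for the tool.

# Illustrative only: headline single-step accuracy versus a multi-year horizon,
# assuming errors compound independently (a simplifying assumption).
single_year_accuracy = 0.95
horizon_years = 5
print(single_year_accuracy ** horizon_years)  # ~0.77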

Mark Bradley:
So, Tae, what are we seeing in the US around machine learning and discrimination?

Tae Royle:
I want to explore another issue, which I find quite troubling, and this is in the social media space, where there's advertising on social media applications. Redlining was a historic practice in the United States, and its effect was a systematic denial of the provision of goods or services, often financial services, to disadvantaged or minority communities. What actually happened in practice was that bankers would draw big red circles on maps, and they would either exclude lending to minority communities who wanted to enter those areas, or, alternatively, the affected individuals might only be able to obtain a mortgage if they were to purchase a home within the circles. It had the effect of spatially separating communities. Now, this practice was outlawed under the Fair Housing Act in 1968 in the United States.

But we're starting to see issues with the growth of advertising on social media. One of the great benefits of using a social media platform, if you're an advertiser, is the ability to micro-target the individuals to whom you wish to offer services, and you can classify them on very, very narrow criteria. But the problem is that this can be a form of redlining. And so what we're seeing is a slew of cases being brought in the United States, including some class actions. These are being systematically settled by the social media giants, because they know that there are real problems in relation to what is effectively their core business model, which of course is all driven by artificial intelligence and machine learning. So we need to be very careful about how these business models are applied so that we can avoid some of these discriminatory effects.
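One way the discriminatory effect Tae describes can be surfaced in practice is to compare the rate at which an offer or advertisement reaches different groups. The sketch below, using invented numbers and group labels, applies the "four-fifths" disparate-impact rule of thumb drawn from US anti-discrimination practice; it is an illustration of the idea, not a description of any platform's actual tooling.

# Hypothetical disparate-impact check on who is shown a housing ad.
# Rule of thumb: a group's selection rate below 80% of the highest
# group's rate is a red flag that warrants investigation.
shown = {"group_a": 480, "group_b": 120}       # invented: users shown the ad
eligible = {"group_a": 1000, "group_b": 1000}  # invented: eligible users

rates = {group: shown[group] / eligible[group] for group in shown}
highest_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest_rate
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")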

Mark Bradley:
I think what you've done really effectively there is to outline the real ethical issues that arise from the use of AI-powered decision-making, and clearly that does create, I think, a need for the law to respond. But at the moment, really throughout the world, but particularly in Australia, there's no specific regulatory framework in place to deal with these kinds of issues as they arise in relation to AI-powered decision-making. Now that's not to say, of course, that the law doesn't respond in some ways. We need to consider these issues through the lens of privacy law and anti-discrimination law, and, in particular circumstances, there may be contract issues, unconscionable conduct issues, perhaps misleading or deceptive conduct issues, and of course, if you're dealing with a government AI, there may be administrative law or even constitutional issues that arise. So there is potential for the law to respond to these issues, but it's very much underdeveloped and uncertain at this point.

So what have we seen so far from governments? Well, there's been a consistent push towards the development of soft law: non-binding frameworks and statements of principles, which businesses and others can use in developing their own AI technology. We've seen that, for example, in the EU, the UK, and Singapore. I think it's really helpful to understand where that work is heading. The non-binding frameworks don't impose enforceable legal obligations, but they indicate the likely direction of future regulation in the area, and following them helps to mitigate and avoid the kinds of ethical issues and risks which Tae has already outlined, which can, of course, in some cases lead to issues under existing legal frameworks. So perhaps just to give you one example of one of those frameworks, the EU has been developing the concept of what it calls trustworthy AI, and this has a number of dimensions which I think are really helpful in unpacking the ethical issues that arise from AI-powered decision-making.

The first is human agency and oversight. So the AI can't be left to operate autonomously; there must be some human agent with responsibility for overseeing its operation and making sure that it's operating properly. Secondly, it must be technically robust and safe, so controls on its design. Thirdly, attention needs to be paid to privacy and data governance, so making sure that the use of AI doesn't expose affected individuals to the loss of personal data to third parties. Next, and this one, I think, is really important, there must be transparency about the decision-making. That seems to have three dimensions: first, an ability for an affected individual to understand what data is being used by the AI; secondly, to understand the result that's been reached as a result of the process; and thirdly, what consequences, in a practical sense, it has had for the individual (a sketch of what such a decision record might look like appears after this list of dimensions). Next, the AI must operate in a non-discriminatory and fair way.

Next, and again, I think this one will be an increasingly important focus of the debate about the regulation of AI: the AI must be used for a proper purpose. You only have to think about things like the use of AI to micro-target political advertising at particular individuals to see that, while AI-powered decision-making is incredibly powerful, it also creates potentially real costs for society, and there may need to be some constraints on its use in particular high-risk areas.

And finally, and this is really at the heart of everything, I think, in terms of the regulatory debate, there must be accountability. So there needs to be some human or corporate actor that ultimately has accountability and liability if the AI operates in an inappropriate way. What I'd like to do now, Tae, is turn towards legal ethics and our duties as lawyers in relation to the use of AI decision-making. But perhaps before we do that, I'd be interested in your reflections on how lawyers are using AI tools and the kinds of issues that might arise from that.
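Picking up the transparency and accountability dimensions mentioned above, one practical approach is to keep, for every automated decision, a record of the data relied on, the result, its practical consequence for the individual, and the accountable human reviewer. The sketch below is a hypothetical illustration of such a record; the field names and values are assumptions, not any framework's prescribed format.

# Hypothetical "decision record" capturing the transparency dimensions
# discussed above: inputs used, result reached, practical consequence,
# and the accountable human overseer.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    inputs: dict        # what data the AI relied on
    result: str         # the output the model produced
    consequence: str    # what it meant in practice for the person affected
    reviewed_by: str    # the human accountable for the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    inputs={"income": 52000, "postcode": "3175"},  # invented example
    result="loan application referred for manual review",
    consequence="decision delayed by up to five business days",
    reviewed_by="credit.officer@example.com",
)
print(record)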

Tae Royle:
By and large, the way lawyers use AI tools today tends to be quite controlled. We use the tools to speed up standard review processes and to extract large amounts of information from thousands of documents, but mostly the outputs are subject to a human layer of review, a human in the loop, before they go out to our clients. But I think as lawyers we do have significant ethical considerations and duties that we need to consider. I think we've got a duty to ensure that AI is explainable so that others can understand the consequences. Would you be able to comment a bit further, in concrete terms, about what you see as the ethical issues facing lawyers?
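As a rough illustration of the human-in-the-loop pattern Tae describes, an AI-assisted document review might let only high-confidence results pass automatically and route everything else to a lawyer before anything reaches the client. The threshold, labels, and document identifiers below are assumptions for illustration, not any particular product's behavior.

# Hypothetical human-in-the-loop routing for AI-assisted document review.
# High-confidence model outputs proceed; everything else is queued for a
# lawyer's review before it goes out to the client.
REVIEW_THRESHOLD = 0.90  # assumed cut-off; tuned per matter in practice

def route(document_id: str, predicted_label: str, confidence: float) -> str:
    if confidence >= REVIEW_THRESHOLD:
        return f"{document_id}: auto-tag as '{predicted_label}'"
    return f"{document_id}: queue for human review (model suggested '{predicted_label}')"

# Invented model outputs.
results = [
    ("DOC-001", "privileged", 0.97),
    ("DOC-002", "responsive", 0.62),
    ("DOC-003", "non-responsive", 0.91),
]
for doc_id, label, conf in results:
    print(route(doc_id, label, conf))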

Mark Bradley:
I mean, I think in terms of the way we're using AI at the moment, you're right that a lot of it is focused on document selection and review. Increasingly, though, we are seeing the use of predictive or selection technology: so the use of AI to help lawyers consider which forum to bring a dispute in, whether it's the federal court or the state court, matters of that kind, and that's a thriving industry in the United States. So in thinking about our legal ethics duties, it's important to think about both aspects and really to think about five core duties: the duty of competence, of supervision, of client communication, confidentiality, and finally duties in relation to anti-discrimination. But I think really they boil down to a few basic concepts. The first is the one you've already mentioned: the need to understand the possibilities of AI and communicate with our clients about that.

And that's something that the ABA, the American Bar Association, emphasizes in its commentary on the duty of competence in the United States, but which I think applies in Australia and a number of other jurisdictions without being expressly stated. Increasingly we see that the use of AI tools can save our external or internal clients money and time, and in many cases generate a better result than more traditional human methods. So it's important that we're familiar with the options there and that we're able to communicate about them with our clients and get their informed consent, whether it's to use the AI tool or to use a more traditional method.

Tae Royle:
Can you really hold a lawyer responsible for the decisions made by a machine? Because the machine just draws off information from decided court cases, and in a sense it's acting independently and impartially. So why would we say that a lawyer is somehow liable for a decision made by a computer?

Mark Bradley:
I think ultimately, Tae, it comes back to the point that we, as lawyers or in business, are employing and using the AI tools for our benefit, and in doing so we assume a degree of responsibility for... An AI tool isn't now, and I suspect will never be, completely autonomous in the sense that it can operate without any human direction or oversight. And ultimately it's important that someone is responsible for the operation of the AI, to ensure that there is a human properly incentivized to make sure that it works in an ethical and appropriate way. And, of course, in many cases it's not a binary outcome where you either use the AI or you use traditional human processes to reach a decision or undertake a document review. Very often there is a level of human involvement in these decisions. And I think part of the ethical obligations of lawyers, and it really flows from the duties of competence and supervision, is to think about how we, as lawyers, need to make sure we're involved in those processes to secure an ethical outcome.

Tae Royle:
Mark, I'd be interested in your thoughts about how we can approach data collection and data management in an ethical manner. What steps should we be taking as lawyers? And what do we need to think about?

Mark Bradley:
Fundamentally, as lawyers, we have a duty of confidence in all matters, not just matters involving AI. I think the difference with AI is that it's very hungry for data, so often you'll hold much more electronic data than you might in a traditional process. And very often the data will be hosted by third parties, rather than by the client or by a law firm, at least for a period of time. So that poses risks, and we have seen recent ransomware threats and attacks against third-party service providers in the legal technology area. So it's something to be very mindful of. Clearly, having appropriate contractual protections in place, to make sure that those third-party providers have their own sophisticated data governance protections in place, is important. But I think as the issue develops and as more and more data is absorbed by our processes, there'll be an increasing focus on whether we as lawyers have satisfied ourselves not just that the contractual protections are in place, but that in practice appropriate steps are being taken to safeguard client data.

Tae Royle:
One of the areas that has troubled me the most is how the use of artificial intelligence can have discriminatory outcomes. If you go out and search on the internet, it's riddled with stories. What obligations do lawyers have to address discriminatory outcomes of machine learning algorithms?

Mark Bradley:
So, as lawyers, we have a fundamental obligation to avoid discriminatory practices in our professional role, and clearly that could be breached by the use of discriminatory selection tools. For example, there's been a recent proposal to develop an AI tool to select arbitrators, where the purpose is to try to improve diversity in arbitration, which is a great purpose. But unless that algorithm is very carefully designed, you could see how it might select people who are the same as existing experienced arbitrators, or very like them, and so entrench, rather than undermine, any unconscious biases which influence the selection process. So I think it's something we have to be very mindful of in legal practice, and we have to make sure that, where those tools are being used, we really understand the measures being taken to mitigate the risk of discrimination in the design of the AI tool. Thanks very much for joining me, Tae. I think it's been an incredibly rich discussion today. Boiling it down, what would you say are the three key things we should do to mitigate the risks associated with AI-powered decision-making?

Tae Royle:
I don't think we should be afraid of using artificial intelligence. It's an incredibly powerful tool that can really extend our ability to make great decisions, fast and in large quantities. But if there were three things that I would hammer on, I would say we have to make sure that we get the training data right, we have to ensure that the decisions made are transparent, and we have to ensure that there are sufficient layers of control and accountability for the decisions made. If we get those three factors right, then we are a long way down the road to addressing some of the really pressing ethical concerns that you've raised today. So I really appreciated speaking with you. It's been a great session.

Mark Bradley:
Thank you for listening. To hear more Ashurst podcasts, including our dedicated channel on all things ESG, please visit Ashurst.com/podcast. To ensure you don't miss future episodes, subscribe now on Apple Podcasts, Spotify, or your favorite podcast [inaudible 00:22:38]. While you're there, please feel free to keep the conversation going and leave us a rating or a review. Thanks again for listening, and goodbye for now.

The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to. Listeners should take legal advice before applying it to specific issues or transactions.
