
1: How a Chihuahua is like a Blueberry Muffin

10 May 2023

Nina Fitzgerald, IP and Media Partner at Ashurst, speaks to Tae Royle, Head of Digital Products APAC from Ashurst Advance Digital, in the first episode of our AI series, which explores and unlocks the mysteries of AI and its legal applications.

Nina and Tae share insights into how a machine learns and the advantages and disadvantages of AI applications, and explain why a chihuahua is like a blueberry muffin!

"There are limitations when you work with artificial intelligence," says Tae. "It can only work with what you have trained it on."

Transcript

Nina Fitzgerald:
Hello, and welcome to the first episode of Ashurst's new podcast channel, Ashurst Legal Outlook. Ashurst Legal Outlook will keep you across the global trends and local issues that are shaping the legal landscape and impacting organizations. We'll offer unique perspectives on the most pressing legal matters impacting banks and funds, the digital economy, energy and resources, infrastructure, real estate and more. This episode is the first in a series dedicated to artificial intelligence. This series will explore what AI is, its strengths and weaknesses, ethical considerations, AI's relationship with intellectual property, as well as the future of AI in legal services.

Nina Fitzgerald:
You will hear global perspectives along with local Australian regulatory examples, as we explore and unlock the mysteries of AI and its legal and business implications. So join me, Nina Fitzgerald, IP and Media Partner at Ashurst, as we speak to Tae Royle, Head of Digital Products APAC from Ashurst Advance Digital, about how AI works and how to break it. You are listening to Ashurst Legal Outlook. Hello, and welcome, Tae. It would be remiss of me in our first episode if I didn't ask you to explain what AI is, and what we will be talking about in this series.

Tae Royle:
Thanks Nina. When we look at this subject, it's really important to understand what AI is and how it works, in order to understand how it impacts on people. I'm going to draw on a quote from a digital evangelist called Alan Lapovsky, something he tweeted a while ago: "Not all automation is AI, not all workflow is AI, not all chatbots are AI." For something to be artificial intelligence, it has to learn and improve over time by training. This is a really important concept: that it's continuously improving and learning from the inputs it receives from us as humans, or from its environment. That's what makes it so incredibly powerful.

Nina Fitzgerald:
So, you're really talking there about the idea of machine learning and the ability of AI systems to train themselves. It sounds a little bit like what we've seen in movies of the past, and that doesn't always end well for the human race, but let's dig a bit more into exactly how a computer learns.

Tae Royle:
A computer learns by being taught how to recognize patterns. Take, for example, an image of a dog, which is something a computer can process. If you look at that image, you can see that the dog has a nose, and it has eyes and it has ears. And these, the nose, the eyes and the ears, are all arranged in certain proportions to each other. And by and large, for most dog species, they're all located in roughly the same position. So what a computer can do is process that image, identify the location of each of these items, measure the ratios of the distances between them and, from all that, construct a mathematical model that shows the relationships and spatial orientation of these items. Having built that mathematical model, it can apply it to another image and decide whether or not the model fits.
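
To make that concrete, here is a minimal Python sketch of the idea Tae describes: measure where the features sit, express their spacing as ratios, and score how well a new image fits the stored model. The landmark coordinates below are entirely hypothetical, not taken from any real system.

```python
import math

# Hypothetical landmark positions (x, y) from one reference dog image.
# Illustrative numbers only; a real system learns these from many images.
REFERENCE_DOG = {
    "nose": (50, 80),
    "left_eye": (35, 50),
    "right_eye": (65, 50),
    "left_ear": (20, 15),
    "right_ear": (80, 15),
}

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def feature_ratios(landmarks):
    """Describe a face by ratios of distances, so the description is scale-invariant."""
    eye_span = distance(landmarks["left_eye"], landmarks["right_eye"])
    return {
        "nose_to_eye": distance(landmarks["nose"], landmarks["left_eye"]) / eye_span,
        "ear_span": distance(landmarks["left_ear"], landmarks["right_ear"]) / eye_span,
    }

def model_fit(landmarks):
    """Score how closely a new face's ratios match the reference model (1.0 = identical)."""
    new, ref = feature_ratios(landmarks), feature_ratios(REFERENCE_DOG)
    mean_error = sum(abs(new[k] - ref[k]) for k in ref) / len(ref)
    return max(0.0, 1.0 - mean_error)
```

Real systems learn these patterns from thousands of images rather than one hand-coded reference, but the underlying idea of fitting a mathematical model is the same.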

Tae Royle:
Then what we can do is take the same approach and create a mathematical model of images of cats. And we can teach the computer to understand what a cat looks like. Importantly, a computer has no meaningful understanding of what a cat is, or what it represents. All it has is a mathematical model that it can match, and it can say, "This mathematical model fits the image of a cat on an 82% basis." So, we're 82% confident that this is a cat. We can also take that mathematical model and apply it to an image of a dog and say, "This model matches the image of a dog 91%." Therefore, we're 91% confident that this is an image of a dog. Now relevantly, one of the key limitations of an AI model is that it can only recognize things it's been trained on.

Tae Royle:
So if you were then to present the computer with a picture of an elephant, it would only be able to measure against dogs or cats, and it might give you an answer: "I'm 40% confident this is a cat and only 10% confident this is a dog." But it's completely wrong on both counts. We can teach computers to learn, but we always have to bear in mind that they don't understand. They can only draw by reference to what they've been shown before. Now, you can take that approach and you can also apply it to legal language, meaning you can teach computers to recognize legal language. We're incredibly fortunate in the sense that legal language is highly stylistic, it's quite rigid in its form, and lawyers love to follow precedent. This is wonderful, because computers also love to follow a precedent.
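
The closed-world limitation Tae describes fits in a few lines. In this sketch the fit scores are hypothetical and simply mirror the percentages quoted above; the point is that the classifier can only answer in terms of the classes it was trained on, with no way to say "none of the above".

```python
# Hypothetical per-class fit scores, one per trained model. Nothing in the
# system can flag an input as unlike everything it has seen.
def classify(fit_scores):
    best = max(fit_scores, key=fit_scores.get)
    return best, fit_scores[best]

print(classify({"cat": 0.82, "dog": 0.91}))  # a dog photo: ('dog', 0.91), roughly right
print(classify({"cat": 0.40, "dog": 0.10}))  # an elephant photo: ('cat', 0.40), wrong
```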

Tae Royle:
Law is a bit like that, but let's just park that for the moment. If you take something like a governing law clause, it has a very formal structure, and lawyers from most common law jurisdictions would instantly recognize a governing law clause when presented with it in a contract. Likewise, you can train a computer to create a mathematical model that will represent a governing law clause and can recognize that language. That's where we get natural language processing from: it's building these beautiful mathematical models that can run through language and extract the underlying patterns from it. The same weaknesses apply, though, that we had with the pictures of the cats and the dogs. If you only train your machine to recognize governing law clauses, then it won't recognize a change of control clause. An important limitation to bear in mind, as we move through the series, is that when you're working with artificial intelligence, it can only work with what you've already trained it on.
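
As a sketch of how that might look in practice, assuming the scikit-learn library and a toy handful of hand-invented clauses (a real system would train on thousands), a simple bag-of-words model can learn to flag governing law language:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, hand-invented training set: three governing law clauses and three others.
clauses = [
    "This agreement is governed by the laws of New South Wales.",
    "This deed shall be governed by and construed in accordance with the laws of England and Wales.",
    "This contract is governed by the laws of Singapore and the parties submit to its courts.",
    "The supplier may terminate this agreement on thirty days written notice.",
    "Neither party may assign its rights without the prior written consent of the other.",
    "Payment is due within fourteen days of the date of each invoice.",
]
labels = ["governing_law"] * 3 + ["other"] * 3

# Tf-idf weighted word and word-pair features, fed to a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

test = "This agreement shall be governed by the laws of Victoria."
print(model.predict([test])[0])  # expected: governing_law
print(dict(zip(model.classes_, model.predict_proba([test])[0])))
```

Exactly as Tae warns, this toy model only knows its two labels: a change of control clause would simply be shoehorned into whichever bucket fits best, with no signal that it is a type the model has never seen.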

Nina Fitzgerald:
I don't think I've ever heard the words beautiful and maths used so closely together in the same sentence, but I can certainly appreciate that the ability to analyze data and then come up with a mathematical model is truly quite remarkable. But let's delve a bit more into the advantages of AI. Obviously efficiency is the number one way it's used at the moment, but what are some of the other advantages that we're seeing?

Tae Royle:
Artificial intelligence and machine learning are incredibly powerful when used at scale. So, if you run a large supermarket and you need to move products onto shelves, or you're a banker who needs to make thousands of decisions about consumer loans, or you're an M&A lawyer who needs to review large portfolios of documents, or you're a litigator who needs to find the smoking gun in a vast trove of email correspondence, you need to make potentially hundreds of thousands of decisions. And you have a legitimate interest in making those decisions as quickly and as cost-effectively as possible. Computers are of enormous assistance in helping you deliver against this task.

Nina Fitzgerald:
Can you give us an example of that?

Tae Royle:
To give a simple example, we were acting for the developer of a large project a few months back, and we needed to review 6,500 documents, which were submissions regarding the development of the project. A large proportion of those submissions were based on a form document produced by a third party. Overnight, while the lawyers were sleeping, we were able to parse those submissions to extract the form content and separate out the unique content, which needed to be specifically addressed for each person providing a submission in relation to the matter. One of the great challenges that lawyers have is that they have to cover great distances in a limited time. There are a lot of very large, fast-moving transactions that require a lot of consideration, particularly with due diligence.
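
A hedged illustration of the underlying technique, using Python's standard difflib rather than whatever tooling was actually used on the matter: align each submission against the known form document and keep only the text the form does not account for. The form wording and the address are invented for the example.

```python
import difflib

# Invented stand-in for the third party's form document.
FORM_TEMPLATE = (
    "I object to the proposed development because it will increase traffic "
    "and reduce local amenity."
)

def unique_content(submission, template=FORM_TEMPLATE):
    """Return the parts of a submission that do not come from the form template."""
    matcher = difflib.SequenceMatcher(None, template, submission)
    unique = []
    for op, _, _, j1, j2 in matcher.get_opcodes():
        if op in ("insert", "replace"):  # text present in the submission but not the form
            unique.append(submission[j1:j2])
    return " ".join(part.strip() for part in unique if part.strip())

submission = (
    "I object to the proposed development because it will increase traffic "
    "and reduce local amenity. It will also overshadow my garden at 12 Example Street."
)
print(unique_content(submission))  # -> "It will also overshadow my garden at 12 Example Street."
```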

Tae Royle:
So I can remember one morning waking up lying on a couch, with a yellow stripe down the page in front of me where the highlighter had fallen from my grasp and streaked down the page. Under those circumstances, when humans are working late at night, their accuracy rate falls. But if we're using something like artificial intelligence, we always get a very consistent result; we largely know what we're going to get. So we can assure ourselves, for example, that in this particular project I'm referring to, we were able to hit accuracy rates of 96%. When you start to get above 96, 98%, you're actually starting to exceed levels of human accuracy, particularly where you're dealing with thousands or tens of thousands of documents. We can show that artificial intelligence can actually be more accurate than humans in appropriate use cases, as well as being faster and cheaper. So it's an incredibly useful tool, so long as you keep in mind some of the limitations.

Nina Fitzgerald:
I do recall many late nights conducting document review in huge discovery exercises. And I'm sure every lawyer would welcome the opportunity to use technology to minimize the amount of time spent doing those types of activities. But there is also a real genuine fear, not only amongst lawyers, but in the community generally, that AI is going to replace us, and we might lose our jobs. Is this our greatest concern?

Tae Royle:
I don't think we should worry too much about computers taking our jobs per se, because the way technology works is that it destroys some old roles, but it also creates new roles. For example, when you create these systems, you need people who are going to train the systems, who are going to support the systems, who are going to manage the systems, and who are going to supervise the systems. And the amount of work actually carried out by the systems is a relatively small slice of the total work to be done. In many ways, I think there's the opportunity to improve the lives of lawyers. I can remember one of my first jobs as a junior lawyer: I was working on Bay Street in Canada, and we were served with a document production notice in relation to the distribution of movies in Canada. We were acting for a movie distribution entity, which effectively meant we had to produce every single document that that business had created over the previous five years.

Tae Royle:
We spent five months packing 200 file boxes with an average of 1,000 sheets of paper each, something like 200,000 pages in total, for this production. And as lawyers, we weren't really learning anything at the time. It was just a very repetitive and mind-numbing task. Nowadays we would use eDiscovery software and technology-assisted review to carry out this work. And I would like to think that that constitutes a significant quality of life improvement for lawyers.

Tae Royle:
Furthermore, it's been some time since I've actually had to handle carbon paper myself. The flip side of that, though, is that there are real challenges with artificial intelligence and how it's being used, and I think we need to address those issues squarely. Some of it is due to a misunderstanding of how artificial intelligence works and its limitations. I've mentioned the limitations that arise if you don't provide it with the right training material. There are also mathematical limitations: these are mathematical models, these are statistical models, and the same sorts of problems that you see in statistics also pop up in machine learning.
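
One classic statistical trap carries over directly. A small sketch, with invented numbers: on an imbalanced document set, a model that has learned nothing useful can still report a high accuracy rate.

```python
# Invented numbers: 95% of clauses are "other", 5% are the clause type we care about.
actual = ["other"] * 950 + ["governing_law"] * 50

# A "model" that has learned nothing and always answers with the majority class.
predictions = ["other"] * 1000

accuracy = sum(p == a for p, a in zip(predictions, actual)) / len(actual)
found = sum(p == a == "governing_law" for p, a in zip(predictions, actual))
print(f"accuracy: {accuracy:.0%}, target clauses found: {found}")  # accuracy: 95%, found: 0
```

This is why accuracy alone is a poor yardstick on skewed data; measures like precision and recall tell you whether the model is actually finding what you care about.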

Nina Fitzgerald:
We know that humans regularly make mistakes, even though we like to think that we don't. Does AI also make mistakes?

Tae Royle:
Yeah, it's really important to recognize that when you're working with statistical models and when you're working with probability, you can get really high accuracy rates. We can get accuracy rates well into the nineties. But the flip side is that you also make mistakes. When you have a 92% accuracy rate, it is statistically inevitable that 8% of your answers will be wrong. We as lawyers really struggle with that, because we like to think that all of our answers are right all of the time. However, that level of certainty is not actually necessary in all use cases. I'd like to give the example of a contested bid we were working on, where we were working under incredible time pressure and we had to review 2,000 leases in a week. It was taking people about two hours per lease to review them: about 4,000 hours in total.

Tae Royle:
What this meant was that, in a traditional scenario, 4,000 hours is a couple of years' worth of work for an individual, and you just can't compress that into a one-week period. So what we would normally do is undertake a sampling exercise and review leases which were either representative of the total portfolio or, alternatively, pick out the most important leases and review those. By using artificial intelligence, you can actually review the entirety of that lease portfolio. However, you might only be 92% accurate; you will, statistically inevitably, make a number of mistakes along the way. Because of the time pressures on that particular matter, a decision was made in consultation with the client that there would be no human review layer sitting above that work. And this is actually the first example of work product at Ashurst where the final work product was produced entirely by machine, without any human supervision.
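
The arithmetic behind that judgment call is worth making explicit. A short sketch using only the figures quoted in this episode:

```python
leases = 2000
hours_per_lease = 2
accuracy = 0.92

human_hours = leases * hours_per_lease      # 4,000 hours of traditional review
expected_errors = leases * (1 - accuracy)   # mistakes that are statistically inevitable

print(f"Manual review effort: {human_hours:,} hours")
print(f"Expected leases assessed incorrectly at 92% accuracy: {expected_errors:.0f} of {leases:,}")
```

The trade-off the client accepted, on these numbers, is roughly 160 imperfectly reviewed leases in exchange for compressing years of work into a week.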

Nina Fitzgerald:
That's really interesting, Tae. And I know that you're going to touch more on the ethics of AI in episode two of this podcast series, so I won't ask you for more details in that regard, but it seems like something we need to be really conscious of. It's heartening to hear that humans will still have a role in making key decisions, or should still have a role, even if there is some trust that the AI system can make the decision itself. But I really want to get to the title of this episode. There is a reference to chihuahuas and blueberry muffins, and I don't really know where you're going with this. So, I want to see how we get to this in the context of AI.

Tae Royle:
Nina, I'd like to share a picture with you. This picture shows some of the real challenges faced by AI, particularly when it is presented with limited training data. So, if you'll indulge me, please explain to me what you're seeing on the screen.

Nina Fitzgerald:
Listeners can check out the show notes to the podcast for a link to the image that I'm looking at. Blueberry muffins and dogs. They look quite similar.

Tae Royle:
So, the third row, quickly: which one is it?

Nina Fitzgerald:
Blueberry muffin, dog, blueberry muffin, dog.

Tae Royle:
Bottom row, fast.

Nina Fitzgerald:
Dog, blueberry muffin, dog, blueberry muffin.

Tae Royle:
I think you've performed admirably well. Despite having been given little advance training on this, you've shown just how adaptable humans can be in seeing the difference between blueberry muffins and chihuahuas.

Nina Fitzgerald:
Very nice, Tae. I guess it'd be really interesting to know, then: how would an AI system look at these blueberry muffins and chihuahuas? Would it be able to tell the difference?

Tae Royle:
Given sufficient and appropriate training, I would say that an AI system would ultimately be much faster and much more accurate than a human, but it would take some time to get it right. I also think this shows how AI can make incredibly glaring errors that humans themselves would regard as ridiculous. When humans get things wrong, they often get things wrong by a matter of degree: they might mistake various breeds of dog at a distance, for example, or confuse one small animal for another.

Tae Royle:
When computers get things wrong, they get things horribly wrong. A computer will mistake a banana for a jet airplane, or it will confuse a polar bear with a train. These sorts of jarring errors are difficult for humans to understand, because a human inherently understands the nature of things, whereas the computer is just using mathematical models. It means that it can sometimes be culturally quite difficult to introduce artificial intelligence tools into a workplace. And it's something that we need to work on from a change management perspective, because team members might legitimately ask, "How can it be that we can trust this tool to make the right decisions when it so obviously makes such wrong decisions, mistaking a chihuahua for a blueberry muffin?"

Nina Fitzgerald:
Really interesting insights, Tae. I think that's a great way to end our overview of how AI systems function and the intricacies of machine learning. I've really learned a lot. Thank you.

Tae Royle:
Thanks Nina. It's been a lot of fun and I look forward to our future episodes.

Nina Fitzgerald:
Thank you for listening. This was part one in our six-part series exploring AI, with episodes released weekly. To hear more Ashurst podcasts, including our dedicated channel on all things ESG, please visit ashurst.com/podcast. To ensure you don't miss future episodes, subscribe now on Apple Podcasts, Spotify, or your favorite podcast platform. While you're there, please feel free to keep the conversation going and leave us a rating or review. Thanks again for listening, and goodbye for now.


The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to. Listeners should take legal advice before applying it to specific issues or transactions.