Turning research into solutions and understanding and managing AI risk – transcript
The following is the transcript of session three of the Artificial Intelligence Forum held on 14 March 2019 in Sydney.
Moderator - Robert Todd, Partner, Ashurst
Panellists - Dan Jermyn, Head of Data Science, Commonwealth Bank; Richard Kimber, CEO and Co-Founder, Daisee; and Tim Brookes, Partner, Ashurst
Robert Todd: We're gonna change up a little bit, or down depending on your point of view, and talk about business opportunities and indeed how all of this AI might be affecting the law in terms of regulation and what government might do. Our panellists this morning - first of all, I'll introduce Richard Kimber, here. Richard … he's played on both sides of the fence, big and small. He was at Google for quite a while. He's the CEO and co-founder of a company called Daisee, an AI software company with a product called Lisa, which uses natural language processing to improve call centre compliance as well as the broader customer experience. I'm sure he might even mention that and explain it a bit more to us. It won an award at the Fifth Annual Westpac Innovation Challenge and I think you've won quite a few other awards as well, Richard. Welcome.
Also on the panel, right next to me here, is Dan Jermyn, and Dan's a data science expert. He runs the largest AI and data science practice in Australia, at the Commonwealth Bank. Now Dan's an experienced leader in data and technology and has that rare combination of advanced technical skills and experience applying technology in a business setting. He's also been the co-founder of a number of successful start-ups, so he's seen it from both the big and the small perspective. So thank you, Dan, welcome to the panel.
And finally, for the legal perspective, you've met Tim already. Tim's the co-leader of our digital economy group here at Ashurst, has an interest in all these sorts of areas, and practises in the area of data, which is central to all of these sorts of issues.
So, Richard, first off. Obviously AI's very complex to write and that affects the roles of data scientists and software engineers, but to really produce innovation do we need to harness other skills?
Richard Kimber: Definitely, I mean, I think it's very much a multi-disciplinary exercise and I think, you know, often people think of AI as just a data science exercise. And for me it's a lot broader than that in terms of really understanding, you know, what it is you're looking to achieve. The way that you go about producing AI software is obviously quite different to normal software, in the whole process of training the algorithms. And so in terms of how I think it's done, you know, I think the element that's probably least appreciated is the software engineering side of it. The challenge obviously with AI is the vast quantities of data, and often AI solutions are being created around unstructured data. A lot of emphasis historically has been placed on structured data, which is, you know, information that's in databases or in tables, and that doesn't really require AI, per se, to be utilised. But unstructured data is where I really see a huge opportunity for AI, and we've seen that both in areas such as NLP but also computer vision.
In terms of the skill sets, obviously the legal skill set is a critical one. There are numerous issues around ethics and privacy and all of those sorts of things, and so in terms of, you know, how you actually construct these solutions, particularly when you have a global perspective, you have to take into account the different jurisdictions as well. So, yeah, a very complex area; I think, you know, it's super exciting and I think offers enormous opportunity.
Robert Todd: So tell us a little about Daisee or one of your other products and the sort of team you'd have working on that product.
Richard Kimber: Sure, so in terms of our product, it started in a university, as a lot of machine learning applications do, and we spun out that particular technology, which is a semantic engine that looks at the meaning of words: not just keywords, but actually looking at context and at how you infer the outcome of a conversation. And what we have done with that is re-engineer it from a university-style discovery exercise into a software platform, so we've completely rewritten it in a different, highly scalable language. We've also made it very much "Cloud ready", as it were, so we deploy our solutions through the Cloud, as many AI companies do. So there's quite a large emphasis on DevOps, or the capacity to actually do, you know, the actual software construction. We also then have product managers and product leaders, and I think one of the key elements here is really understanding, you know, how the workflow is going to be impacted by the solution and being able to integrate it into existing workflows.
One of the challenges we see in many Australian institutions is there's a lot of legacy technology, so one of the things we've focused on has been making sure that our solution works like middleware, so it can sit on top of existing systems and you don't need to rip out your systems of record or try and change all that stuff, which can take a lot of time. The other skill sets have been around change management, obviously, and working with the people involved, and I think a big task here is to bring people on the journey with us; you know, building trust and engagement is a key element of getting these AI solutions accepted.
Robert Todd: And just developing that a bit. You've worked with a lot of companies, some very large ones.
Richard Kimber: [Agrees].
Robert Todd: What sort of obstacles do you face when you work within that company setting?
Richard Kimber: Numerous. [Laughs]. You know, I think obviously there's quite a lot of variation in the awareness around AI across the Australian landscape, and we did a survey that showed that there are a lot of companies that are investigating or dabbling in machine learning and AI tools. But there were very few that are actually putting them into production; I think CBA is one of the few. There are other companies that have data science teams, but they're typically looking at historical information. They're not in the business of prediction, which for me is one of the distinctions between AI and BI, and you know, there are a lot of companies doing BI, which is, I think, interesting but not necessarily in the same realm.
So the other obstacle, I guess, we need to face as a smaller company is the procurement process that companies have. They very much like to work with the big end of town; you know, the old saying that no-one gets sacked for hiring IBM. I think that's probably going to come under review at some point. So, you know, obviously we need to change the mindset around big companies working with smaller ones. And then finally, in terms of the obstacles we face, often it's a "not invented here" thing, which is, you know: we think that's pretty cool, but we'll build it ourselves. And in reality we don't see a lot of capability in terms of actual software engineering in AI in corporates in Australia.
Robert Todd: Yeah. Dan just picking up on that … from inside a large organisation.
Dan Jermyn: [Agrees].
Robert Todd: A lot of talk about job losses, but what are the job opportunities arising from the sort of organisations you're dealing with?
Dan Jermyn: Yeah, I mean, it's a great question and I think there's a lot of hysteria about killer robots coming to get us, a dystopian future, and the previous chap sort of touched on some of those things, and you can understand this of course. It's easy for us to imagine something that's like us, thinks like us, but wants to take over what we're doing. And this is nothing new. I think in terms of jobs it's important to look at this from kind of two angles.
The first is the sort of historic interplay between technology and progress. If you go back to the 1800s, where I'm from, the Luddites were smashing weaving machines because they were worried about the implications. I'm sure at some stage there was a caveman who was like, you know, "it's great that you've invented fire, but think about what you've done to my mammoth fur coat business". Less glibly, the reason I'm very sensitive to these concerns is that where I'm actually from is South Wales, the classic "South Wales", not the new and improved version.
Richard Kimber: [Laughs].
Dan Jermyn: And … this is a bit better, I hope my mum and dad aren't watching. And I'm from Cardiff, which at one point, not a whole long time ago, was the busiest port in the entire world because of the local industry around there, coal mining. And there are all sorts of reasons why that has kind of gone away - political, economic - but technology has certainly played a part in that. You look at Cardiff now and see a growing, thriving community with a technology hub and a lot of jobs in media, and I think the World Economic Forum predicts that there will be a net 50 million additional jobs in the world as a result of AI. I think we need to focus on what it actually means, what are the changes that are coming, because there certainly will be disruption.
But for me, I think, it's mostly task automation rather than job automation. So it's the tasks themselves - the ones that are easily kind of repeatable, perhaps dangerous or dull, or not, you know, the best use of our human capital - that are the ones where we'll all kind of see the most growth. And certainly that's true for us at the Commonwealth Bank. When you think about banking as well, there's that issue of technology and what it means for jobs.
The ATM was invented at one point, and ATM stands for "automated teller machine", probably the worst bit of branding of all time. You can imagine people at the time being terrified: you know, you've tried to automate me, a person. Actually what ATMs did was, you know, reduce the cost of serving customers and increase the number of branches that banks were able to have as a result.
If you think about today and the Commonwealth Bank, probably our flagship implementation of production AI would be our customer engagement engine. And again, it's a huge Australian success story; we think it's the most advanced version of its kind in the world. It's how we manage our communication with our customers. Every time we make a decision with our customers we're analysing 30 billion data points in real time, across 19 different channels, across all of our customers. So, an absolutely incredible amount of data being processed at ludicrous speeds that couldn't possibly have happened previously through a human. Now what we actually do with that is apply machine-learned models to try to predict what's the best thing to talk to a customer about at any particular point in time. And what we've found that's done is get the best out of our brilliant asset, our best asset, our people. The human interactions that we have with customers are outstanding, and the feedback for the Commonwealth Bank - we've been in the news a little recently, you might have heard about it [Laughs] - feedback on our frontline staff has always been first class. You get a great experience when you speak to people who genuinely want to help you.
What we've used the technology for is just to improve that interaction. So, we provide more time for our staff to talk to customers about the right thing by analysing all of that data. We know that they're able to talk about what matters for that customer at that moment in time, based on what they've done previously, rather than kind of guess around it and maybe have a very prescriptive view of how to [treat] certain customers. So, we actually get more personal, more human interactions as a result of the application of AI.
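To make the idea concrete, here is a deliberately simplified and entirely hypothetical sketch of a "next best conversation" scorer of the kind Dan outlines: candidate topics are scored against a customer's recent activity and the highest-scoring one is surfaced to staff. CBA's actual engine is proprietary and vastly more sophisticated; the topic names, weights and fields below are invented for illustration only.

```python
# Hypothetical sketch only: hand-written weights stand in for the
# machine-learned models that would score each candidate topic.
from dataclasses import dataclass, field

@dataclass
class Customer:
    # Recent behavioural signals for one customer (invented field names).
    recent_events: dict = field(default_factory=dict)

# In a real system these weights would be learned from data; here they
# are fixed so the scoring logic stays readable.
TOPIC_WEIGHTS = {
    "hardship_support": {"missed_payment": 2.0},
    "savings_review": {"new_salary_deposit": 1.5},
    "fraud_check": {"unusual_spend": 3.0},
}

def next_best_topic(customer: Customer) -> str:
    """Score every candidate topic against the customer's recent events
    and return the highest-scoring one."""
    def score(topic: str) -> float:
        weights = TOPIC_WEIGHTS[topic]
        return sum(weights.get(event, 0.0) * value
                   for event, value in customer.recent_events.items())
    return max(TOPIC_WEIGHTS, key=score)

# A customer who just missed a payment gets the hardship conversation first.
print(next_best_topic(Customer({"missed_payment": 1, "unusual_spend": 0})))
```

The point of the design, as Dan describes it, is that the machine only ranks what to raise; the human conversation itself stays with the staff member.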
Robert Todd: So, just to broaden it out a bit, what do other businesses, and more broadly government or the community, need to do to embrace AI in Australia? I mean, where do we sit in the world? What do we need to do?
Dan Jermyn: It's an interesting question because obviously, as you'll tell from the exotic accent and my previous story, I'm not from these parts. I've been here for just under two years, and before I came I spoke to a lot of my colleagues and said, "I'm going to Australia, you know, what do you think?" They all said, "Great weather. Not sure how advanced the kind of tech scene is out there". I find it to be fantastic, I have to say, but sporadically and in pockets, and I'll tell you kind of why I think this.
I think, to answer your question directly, what are the conditions that we need in Australia to be successful? Two things, really, above all else. One is the kind of educational aspect, and we've heard already about the universities in the local area and the prodigious rates at which data science is being taught, learned and developed here, and I think it's outstanding. It's a match for anywhere in the world, and certainly the graduates that are coming through my department right now are incredibly capable. I've been blown away by the kind of local talent that's available in Australia. So, in that sense I think we're in a good spot … we need not to be complacent, but we're in a very good spot.
The second is the kind of infrastructure, and this is one where I think we need to get serious locally. By that I mean, if you think about autonomous cars and things that are gonna depend on very fast decision-making based on very large amounts of information out there in the wild, we're gonna need very good infrastructure to be able to process and transmit that in real time. So things like 5G networks become very, very important. This, I think, is gonna be a key thing for the success of the Australian economy. I mean, take an example from Europe: Estonia. In the post-Soviet era, Estonia emerged and the government made a very big bet on this thing called the Internet: that sounds important, let's invest in that. Free Wi-Fi everywhere. Fast forward to today, Estonia is one of the most advanced technological countries on earth, has one of the fastest local networks anywhere on earth and is incredibly innovative. They've been doing Blockchain for internal government purposes for at least ten years. So I think it can be done, but it takes concerted effort at a macro level for the country, I think.
Robert Todd: Just coming back into the organisation, and perhaps I can get two views on this, one from within and one from a provider. What do organisations need to do to deliver an AI solution? What are they doing, what's missing? You first Dan, then I'll come to you Richard.
Dan Jermyn: Again, a great question and obviously one that …
Robert Todd: Is it investment or support, what are the challenges there?
Dan Jermyn: There are lots, and like any good scientist I suppose the answer is it depends. But I think there are kind of two things which are universal in this space. Number one is the data itself. Particularly a large organisation like ours, and many organisations around Australia, will have grown organically over time, bought a lot of kit as new technology came out, and will have found that they have lots of disparate sets of data in different formats in lots of different places. Now, for the conditions for AI to flourish - and you know, Richard alluded to this earlier - you're talking about data on a vast scale, and you're talking about unstructured data and easily repeatable things humans are good at. You need well-labelled data for the computers to learn from and to train on. So, investment in what I would, and apologies to anyone in the room, call the boring stuff - the data layer, getting all your data lineage right, knowing where the data comes from, having it in one place where you can access everything and apply microservices over the top of it - is extremely important at the outset, before you even think about hiring a data scientist. And the second, and I think even more important, thing is the governance around that. So the people that you put around that to implement controls and risk management, to make sure that we understand how these things are being applied. There are a lot of new challenges that will come out that we've never really had to face previously, and it's incredibly important that we take our own kind of personal accountability for the things we are doing and take great care to be transparent about how we're using this data, and to be knowledgeable about what we're putting into the machines and what we're getting out as well.
Tim Brookes: Do you think we need more access to open data, or do you think we need to improve the availability of data sets in Australia for development?
Dan Jermyn: I would strongly encourage it, yeah. I think the more we do that, the more open and transparent we are, the better everybody will be, and I'm a huge believer in that.
Tim Brookes: [Agrees].
Dan Jermyn: Certainly, again, I can speak to the Commonwealth Bank and what we're doing internally; it's absolutely fundamental to us that we are there to improve the financial wellbeing of our customers and communities and be fair and transparent about that. It is why you'll see some of the investments that we make in things like spend tracker notifications. We have a large amount of financial and transactional data flowing through our data centres, so we're taking great steps to make sure that we're being very, very transparent with customers about where they are, and providing them with the tools to better understand the data that we hold about them. That's something that I'd like to see encouraged across the entirety of industry; I think it's beneficial for everybody as well as us.
Robert Todd: Richard, what are the challenges for someone going into a big organisation to get an AI project off the ground?
Richard Kimber: Yeah, I definitely agree with what Dan was saying before. In addition to that, I think there's also an organisational challenge when you look at how most companies are organised. They're not necessarily organised for an AI world, and beyond organising the data, I think it's about organising the people. You know, if you think about how functional silos often work in companies, they don't actually enable AI; in fact they create boundaries around the data, but also around the skills and so on. And I think from that perspective there needs to be quite a big "rethink" about how we're organised and how we manage our companies.
I think the other thing is, there is a very disparate level of understanding of the power and potential of AI. Certainly within the data science community it's very well known, but once you get beyond that, I don't think the leadership groups, certainly not in Australia, have embraced the potential of AI anywhere near as much as we see overseas. So, in terms of the barriers and the challenges, I think there is an awareness gap, and there's also the fear thing some people talk about. But I think ultimately it's about not really having the breadth of imagination around what could be done. And particularly when you've got a lot of legacy in your systems, I think that's one of the big challenges … you know, in a lower-growth environment people are reluctant to invest a lot in something new.
And so I think one of the things we do need to do is take that long-term view and say, you know, this is a structural change; this is not some incremental, you know, fad that just kind of came up. This is the fourth industrial revolution - that's how it's been coined. Countries like the UK have a technology strategy; Australia doesn't. What is our technology strategy? And if you then also talk to the companies, do the companies have an AI strategy? No, they don't. So, at the end of the day I think strategy is about making choices, and where we're sitting right now, I think a lot of people think this is something that might go away or is not gonna have much impact. It's easy for us to see because all we do is live and breathe AI; companies are sitting there doing their BAU [business as usual] things. I don't think Australia has grasped it and I really think there needs to be a big shake-up.
Robert Todd: Yeah. Tim, your turn. In Liesl's presentation we saw some ethical issues arising, I think in particular around James's situation there, chatting to the computer, and it brings back to mind Asimov's First Law of Robotics. But how do we translate those ethical considerations around AI into some form of law?
Tim Brookes: That's an awkward question, and I'd be interested to hear my fellow panellists' views about what's practical, but I think there are probably three approaches which could be adopted, with some countries further along the curve with particular approaches. So there's the principles-based approach: the European Union, for example, is encouraging ethical principles in the design of software. They seem to be heading down the path of trying to control the impact of AI by ultimately setting laws that are principles-based, which presumably would put some accountability on the creators of the software. That slightly begs the question of what happens when the created itself perhaps becomes the creator.
The second approach is, I guess, the standard approach, which is the law indirectly reflecting ethical approaches. And I've bored my team with this example, but I'll share it with the room. So, with our current rules of the road for driving: there was a famous incident where an Uber self-driving car killed an individual. Uber withdrew the car from the road immediately, and I Googled the statistics for US car fatalities that year and it was in the thousands, so we seem to be taking a different standard when we're dealing with automation rather than human behaviour. But the other point is that we've also encoded into our laws a degree of tolerance: the road rules set speed limits and how you stop and park and all of that stuff, and how humans can cross roads and interact with vehicles. So there's a question of utility for driving versus injury and death.
Robert Todd: [Agrees].
Tim Brookes: And so ethically we've accepted there's a degree of death with driving. Are we gonna take that into the future when we deal with automated vehicles, and will we just say "so long as they obey the road rules, that's fine"? And if you have the trolley, you know, the old "trolley dilemma", where it can go left or right and kill one or three, that's just … that will just happen.
Robert Todd: Just picking up on that, I think the statistics are that about 30,000 people get killed in the US each year in car accidents and hundreds of thousands get injured or maimed. And you know, going back, Google in particular have been touting the fact that autonomous vehicles will lower that sort of figure, so that's an interesting thing, but …
Tim Brookes: The third approach is a [inaudible] approach.
Robert Todd: Oh, third approach?
Tim Brookes: Sorry, just to finish off your question. So I think possibly where we will head is trying to put a principles-based approach around the design and operation of AI and then we'll have to have a set of rules that actually have legal consequences if things happen.
Robert Todd: And what are the significant legal issues that are going to arise here in Australia from AI?
Tim Brookes: So there are the obvious ones, which are a fairly well-trodden path, so I won't spend a lot of time on them. There are huge issues around privacy for individuals. As Liesl's seminar I think highlighted, having AI just pervade your life means endlessly collecting personal information about humans; how is that going to be effectively controlled? Will individuals get back control of the information about them? There's a huge number of issues there about what happens to your information on your death - is it something that should be inherited? There's an inquiry in New South Wales about that very issue now. So privacy is a massive thing.
Liability for the actions of AI, if it truly hits singularity, is going to become a pretty critical issue; up until then I think the time-honoured principles of manufacturer's liability may be good enough. But other interesting issues are starting to emerge. We're seeing in America that Elizabeth Warren is campaigning on an interesting platform which, if you take it to its logical conclusion, means there's gonna be some AI which presumably will start to develop monopoly characteristics because of the network effect, and how is society going to effectively regulate that so we don't end up with a monopoly operator in some key areas? So for example, the classic obvious thing is automated vehicles. If you have a platform for operating automated vehicles, is that going to be an "open access regime", or is it going to be a "closed regime" where you have one organisation operating everything from vehicle to service to system? It runs it all.
So that's quite a challenging issue. And then there's the general regulation of AI. I think, and you and I have had a chat about this, the ACCC in its Digital Platforms Inquiry tipped its hat at the regulator getting access to algorithms to regulate AI. I suspect that would put a shiver down most developers' spines, just in the sense of actually having to give up their proprietary technology to a third party, not for any [inaudible] …
Robert Todd: How do you feel about that Richard? I saw you nodding your head there.
Richard Kimber: [Laughs]. Well, it's a very interesting issue because, you know, we do talk to the regulators, and we're working very closely with them around our technology specifically, and it raises many ethical and legal questions, because certainly with our technology we're monitoring every conversation in the contact centre.
Tim Brookes: Yeah.
Richard Kimber: And then there's the obligation, you know. What we do is uncover enormously more issues than companies thought they had. And so questions arise around, well, is there an amnesty where you kind of reset how you deal with this? Because if you're sampling 2% of the calls and you think a particular issue occurs one time in a hundred, and it's actually happening 40% of the time, you've got a major systemic issue you didn't know about. And so there are a lot of those sorts of questions as well around the practicality of it. In terms of the question around the algorithms, we also have this challenge because we're a business-to-business company, so we often come up against this issue of who owns the data, and we work with big companies. We'd love to work with the CBA, so you know, hopefully one day that might happen. [Laughing]. We hear it's a four-year sales cycle, so we've only got two years to go. [Laughing]. If we're lucky.
But, you know, jokes aside, when you're a big company you've got all this information; that is an enormously powerful asset. Companies like CBA - not to pick on them, but other companies as well - realise that this is a fundamental asset, and as much as personal information is an asset, imagine the power of having the aggregation of all that personal information that the banks do. And not only banks, but insurance companies and others.
And then when you turn to the technology companies, I think Elizabeth Warren's point is well made, because as a smaller company you look at these large tech companies and not only do they have the network effect, they've got more money than countries. They've got no debt. They've got no reason to be accountable, and who are they accountable to anyway? So I think, even though I did work for one of them for a while … and obviously there's been this huge rebellion against Facebook because of all the things that happened unconsciously, as Mark Zuckerberg would say, because the algorithms started to do things he didn't expect them to, and the Russians created all these fake profiles that did all kinds of things. And Sheryl Sandberg was quoted even last week saying their response to privacy is to have 40% of their workforce focused on it. But to me that feels like it's after the fact, the horse has bolted; you know, how do you stop it?
Tim Brookes: Yeah, it does raise an interesting question, which is: should the regulators automatically be getting access to algorithms and information? And then it takes you down that potentially slippery path of what is happening in China with the social credit system, which is quite a ...
Richard Kimber: Well, that's the other … the opposite end of the spectrum, whereby the government has the control. You know, I don't know if that's actually better or worse.
Tim Brookes: It's a very good question.
Robert Todd: I'm keen to open up questions to the floor because the wisdom of the crowd sometimes comes up with a great question. One more question perhaps for both of you, Richard and Dan, though: how does Australia compare to the rest of the world in investment in AI, both in dollars but also in human capital?
Richard Kimber: Yeah, so I'll jump in quickly. I think all the facts show that Australia is miles behind. And unfortunately, when you look at the different metrics … I really do agree on the talent here; I think that we do have great talent, we have a fantastic resource in the people we have. Unfortunately, we're not putting our money where our mouth is, though. The Australian Government is investing $29 million in AI; the UK government has committed £980 million over the next few years. The UK is orders of magnitude ahead of us, even relative to population, and they have an industrial strategy that calls out four grand challenges, and AI is number one of them - it's both a challenge and an opportunity.
So in terms of the UK, I think we should definitely be looking towards what's being done there, and it's no surprise that CBA has tapped the UK to bring in a Head of Data Science, because the things they're doing there are, I think, ahead of what we're doing. Obviously people talk about the US, and the US gets lots of focus, but China again - it's billions of dollars that are being invested. So whilst we've invested this enormous amount of money in the NBN, I think that, in a way, and without talking about the politics of it, is really "step one" of the journey, and if we don't spend at least as much on AI as we've spent on the NBN we've really missed a trick.
Robert Todd: Okay. Dan?
Dan Jermyn: Well, let me say firstly, I've no comment on the subject of our sales cycle. [Laughing]. We'll throw that to somebody else.
In terms of where we sit … I think I've already answered some of this, but I can certainly speak to the Commonwealth Bank of Australia, and let me tell you I have the best job in the world right now. It's an incredible place to be doing data science and artificial intelligence. And I think what drives me on most is the ways in which we're using it, and the mandate from the very top to use all of our capability to further the financial wellbeing of our customers and our communities, which is incredibly important to me personally as well. You know, there are a lot of different things that I could be doing, and I came here for a very specific reason, one of which was the bank and what it stood for.
The other, I think, is just the opportunity out here. We talked about the labour pool, and you know, I am very glad to be here, but I do think there are lots of other people around who could do my job very well too, and I'm conscious of that; I'm doing my very best every day to stay in this job that I love so much. I think probably more broadly there are inconsistencies. Richard will be more aware than I am of the wider corporate landscape, as he works with a lot of other different companies, and not all of those will have the same advantages that the CBA has in terms of the size of investment and the size of data. You know, it's an incredible privilege for us to be able to serve so many customers and to manage so much of their day-to-day interactions, and we have to be very, very cognisant of that and respectful of it.
I mean, to come back to the question about regulatory aspects, I think it's absolutely right that there is a higher bar set for AI than for the kind of human equivalents, because it's less well understood. And dare I say it, with no disrespect to anybody here, less well understood by the regulatory and legal professions. You know, I couldn't tell you everything that our guys are doing from a coding perspective; there are vast stores of knowledge. Unlike the kind of dot-com boom, which was obviously technical in nature, data science is deep, and it can be extremely deep, which is why a lot of the people who are successful in the application of it now are PhDs, in a way that previously, you know, a first degree would be enough.
So I worry about that, which is why I think we're heavily investing in explainable AI. It is very, very important to be able to be transparent and explain what it is that you're doing with this data. I think that if you read around the topic you'll see people who will say, "Well, that's not our job": our job in AI is to create the best possible solution, artificial intelligence but as close to real intelligence as we can, and let the philosophers and the governments sort out whether that's right or wrong and how they do it. I don't care how they do it, look how good my linear model is. I don't agree with that at all. I think there's a great moral responsibility on any of the practitioners in this space to be doing good things with it and, because we won't always know what the outcomes are, to be incredibly transparent about how we apply this and to know what it is that is being put into production.
Richard Kimber: Yeah, just to build on that, I think that's certainly an area where there is an opportunity around the algorithms. Explainability is a core construct, and basically what it really means is looking at the data provenance, looking at what data drove what outcome, and then being able to explain bias. Because I think that's obviously one of the key challenges from an ethical perspective: if there's an inherent bias in the data, you need to at least understand what it is, because obviously the AI will become very clever at replicating that bias. And I think that's one of the challenges, so when you talk about doing good for society, obviously that's a key thing. What we wanna be able to do is to say, to the extent there is a bias - a gender bias, a race bias or any other kind of bias - in these decisions that allow you to have a loan or don't allow you to have a loan, or give you a certain credit limit, we need to be able to explain that and then proactively say, "Well actually, you know, we wanna reverse that bias, or we want to level the playing field". And I think being able to use that explainability construct is really important.
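As a rough illustration of the explainability construct Richard describes - attributing a model's decisions back to the input data so a bias can be surfaced - here is a minimal sketch using permutation importance. The feature names and data are invented, and this is just one of many attribution techniques, not a method either panellist endorses.

```python
# Minimal sketch: measure which (hypothetical) features drive a model's
# decisions, so a suspicious driver, e.g. a proxy for a protected
# attribute, can be spotted and investigated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["income", "tenure", "postcode_index"]  # invented names
X = rng.normal(size=(2000, 3))
# In this synthetic data the decision is driven by income and tenure only.
y = (X[:, 0] + 0.8 * X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=1)

# The attribution exposes what data drove the outcome: income and tenure
# score high, postcode_index near zero.
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

The same output can feed the "reverse that bias" step Richard mentions: once you know which feature carries the bias, you can exclude it, re-weight it or audit decisions that depend on it.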
Tim Brookes: One of the legal approaches has been to allow, or require, a human review of an automated decision. Is that a solution, or will that be the way we fill in all of the jobs, do you think?
Richard Kimber: [Laughs]. I hope not [laughs] that will be a terrible job [laughs].
Dan Jermyn: I don't think it will fill all the jobs. I'm very bullish on the subject of jobs; I think we get more, better jobs and remove those that are highly automatable, for the good of everybody. But I think the important thing is to be able to explain the output of anything that you've done before you actually implement it as well. So let's be clear on the distinction between discriminating between things and unfairness. And this is where we do probably get a little bit philosophical, but I think a model by definition is discriminatory; that's what it's supposed to do. It is supposed to try and understand how to treat certain sets of data differently based on what happens, and there's no inherent morality in that. You can defend against specific unfairness by deciding, you know, I absolutely should not use your race, ethnicity, age, things like that, for credit decisioning. But you can't systematise fairness, because there's no single computable answer to what is or is not fair; this is a moral, ethical question for humans.
Robert Todd: So judges are not obsolete quite … quite yet.
Dan Jermyn: There'll be a few "robo" judges but not many. [Laughing].
Robert Todd: We've got a question here …
Tim Brookes: I just wanna ask one question if I may?
Dan Jermyn: Yes.
Tim Brookes: So do you think it's sensible, then, to put the onus on the operator of the AI, or the user of the AI, to monitor it for bias, rather than trying to achieve a particular outcome by a principle?
Dan Jermyn: It is absolutely essential for any model that gets deployed into production to be monitored consistently and continuously, to correct for things like staleness or activities that we have not been able to predict would happen. So, you know, take a very basic example. We will be regulated about what we can and can't use in our models, so it'll say, you know, age: absolutely not. Let's say there's some kind of thing that we're doing and it's inherently wrong to discriminate against customers on the basis of their age for this particular decision. Then we hand it over to the machine and the algorithm, and it does a very good job of modelling whatever it was that you asked it to, and it turns out that the most predictive element within that was tenure, right, which is obviously incredibly closely correlated to age. So I've not discriminated against you on the basis of age, but I have incidentally, because it turns out that I can kind of tell how old you are by how long you've been a customer, some of the time.
And so what's really important, now and in the future of a kind of ethical AI, is not to focus solely on the inputs - which is what we've done so far, where we've been able to explain what's happening because we, as humans, have had to build the thing ourselves - but on the outputs. What have we done to the groups of customers, people, humans that we have attempted to categorise? AI left to its own devices would create an extremely good model for you, because it's working on huge data sets and all it cares about is creating the best result in the abstract. It doesn't necessarily care about what's done to little micro-groups within that, because the overall result is better than the one it was trying to replicate. So it's still absolutely incumbent on us, as the implementers of this thing, to understand, control, monitor and not allow those kinds of discriminations to happen.
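Dan's tenure-and-age example can be reproduced in a few lines. In this hedged sketch, on synthetic data with invented column names, age is deliberately excluded from training, yet because tenure tracks age, an audit of the model's outputs still reveals a sharp age split; this is the output-side monitoring he is arguing for.

```python
# Sketch of proxy discrimination: the protected attribute (age) is never
# given to the model, but a correlated feature (tenure) leaks it anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
age = rng.uniform(18, 80, n)
tenure = (age - 18) + rng.normal(0, 3, n)   # tenure closely tracks age
label = (age > 50).astype(int)              # an age-driven historical outcome

# Train on tenure only; age is excluded from the inputs, as regulation
# in Dan's example would require.
model = LogisticRegression().fit(tenure.reshape(-1, 1), label)
decisions = model.predict(tenure.reshape(-1, 1))

# Output audit: decision rates still split sharply by age group.
for lo, hi in [(18, 35), (35, 50), (50, 80)]:
    group = (age >= lo) & (age < hi)
    print(f"age {lo}-{hi}: decision rate {decisions[group].mean():.2f}")
```

Auditing inputs alone would pass this model; only the group-level output check exposes the incidental age discrimination.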
Robert Todd: Okay we've got a question from the floor.
Al Mackey: Al Mackey from the University of Sydney Foundation Program. This is just a question following on from the idea of explainable AI. I saw a paper recently that argued for a greater role for ethicists working in the development of AI, having them, you know, developing ethics codes and that sort of thing, and one of the things the writer argued for was that ethicists ought to be involved in a sort of ethics audit trail which explains the various decisions and how they were arrived at, and the ethical principles that informed the decisions and the creation of the code. And then it struck me that that's kind of a thing for lawyers too. I guess my question is that there seems to be a sort of rise of ethicists - lots of tech companies are getting ethicists, but they've also got legal counsel - so where's the demarcation of the roles for ethicists and lawyers, and that's the …
Tim Brookes: Okay, I'll have a go first, then I can be corrected. I would say it's a pretty crude but probably simple distinction. The ethicists would help the AI developers and operators develop and operate systems in a way that is ethical. The lawyer's role - and government's, really; it's government in a way more than lawyers, but lawyers kind of help with the process - is to develop, effectively, the rules of the road that give effect to the ethical principles that will underlie the AI, and the lawyer's job is then to interpret that. So I think there's a distinction between what is the right ethical approach to this issue and then the regulation of that.
Al Mackey: [Agrees]. But there might be some overlap, mightn't there? And you know, if there are these two groups of people having input into a decision, I mean, something like an audit trail seems to require them both, really, doesn't it, or …
Tim Brookes: Absolutely. Auditable and accessible by regulators, I think, is a pretty important issue. Putting a lawyer's hat on, you obviously want to be able to poke around at the evidence and see what was happening and, if there was malfeasance, from a regulator's perspective, to take appropriate action.
Robert Todd: We might go to the next question?
Lynn Wood: Lynn Wood from "Ideas, Bias and Ideas Platform". The panel is about turning research into solutions, and I've heard you talk about open data, and Richard mentioned that we don't in Australia have sufficient breadth of imagination in terms of what we do, albeit that we have lots of talent. And I'm just wondering about the idea of only allowing government funding for researchers - that is, scientific researchers - who share their findings for free. That's already happening in Europe; we've mentioned Europe as well, with cOAlition S. I'm interested in the panel's view.
Richard Kimber: Yeah, I think one of the big topics is how you commercialise research in Australia, and it's been a topic of wide debate. You know, we've got CSIRO. We've got Data61. We've got a number of these types of institutes. One of the challenges is how you actually take the IP and commercialise it, and even last night I was at a forum where they were talking about what the commercial model is and how much IP ownership the university retains. And you know, even in the instance of my company, a university has 5% of my company because we agreed to spin out some technology from them. And I think 5% is about right. I'm happy to have something in it for them, but if it was larger than that it kind of skews all the economics, and I think that is one of the challenges here. It also then leads us into the broader discussion around how you foster innovation in Australia beyond the university, and where the funding comes from. You know, Australia has a kind of nascent VC community that's very small, and it does a little bit, but what happens typically is we get reasonably good seed funding, a little bit of angel funding, potentially 'Series A', and then the companies leave.
And so what we've really got to do is figure out how we bridge that gap when companies get beyond, you know, their first commercialisation into actually scaling up. And it is a broad systemic issue here. So the idea of taking research and making it widely available I think is a good one. You know, we have an issue in Australia around our education, I think, where we've been focused very much on the monetisation of education with international students, and again, I think we need to think carefully about how we retain the output of our education in Australia and then how we apply it to our companies.
And one other point I'd make is that, you know, in Australia the largest employers are small businesses, and although the CBA is pioneering and doing some great things in AI, I think we also need to give thought to the fact that so many small businesses are actually missing out. How we can democratise access to AI is a big issue, because fundamentally there are a number of barriers for smaller and medium companies to embrace the opportunity of AI, and what we don't want is the American phenomenon where the big get bigger in a technical sense because they've got the resource, the funding and all that. So in a way we've got to tip this on its head and really think about how we actually get more of Australia participating.
Dan Jermyn: Yeah, I agree with everything you said there, and I think we have to be cognisant of the fact that the Internet has democratised access to knowledge; this is the landscape within which we play, and we should embrace it. Again, I speak from the Commonwealth Bank's perspective. As a large organisation, we do a tremendous amount of work with the local universities and have a lot of relationships and a lot of PhD interns who come to work with us. Their research will become publicly available; obviously there will be publishing based on the work that they're doing with us, and I think we're contributing to the wider growth of AI knowledge throughout the world, not just in Australia, as a result of that. It's also, I think, incumbent on us, where we can, to showcase and share some of the stuff that we do. I mean, we're a corporation; obviously some of that will be proprietary and we'll need to keep it. But take things like our studies into financial wellbeing. You can look this up: we've done a published paper on financial wellbeing in association with the University of Melbourne, and it talked about the impact on consumers of their spending habits and how they would perceive their own financial wellbeing versus what we as a bank would observe to be the case, and you can see big discrepancies there. And I think it's important that we take responsibility to do more to make that kind of social behaviour transparent and share it. I would hope that other banks and other corporate institutions around the world would similarly attempt to follow our lead on this.
Robert Todd: Dan, do you have an internal incubator just devoted to research, looking at specific projects?
Dan Jermyn: Yeah. So when we talk about AI, I think of it in kind of three different buckets for us within the CBA. One would be the application of pre-packaged AI - things people have talked about, you know, chatbots and other things that are software-based and contain AI within them. The other would be the standardised approaches to machine learning - things like AutoML. So we're at the stage now where, and I never tire of telling him this, my boss, who's got a PhD in statistics, which took him four years, can replicate and in fact improve his work in about two minutes now by pressing a button. Sorry, Andrew.
[Laughter].
And that's great; that's how we can deploy this at scale in a way that we never previously could. But the third, and probably the most exciting for me, is the kind of cutting-edge stuff that we do, which doesn't necessarily have a commercial application at the outset: we think about what we can do, we ask big questions, and certainly, yes, we do have that kind of free-thinking "lab-style" approach to AI as well internally.
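For readers unfamiliar with AutoML, the "press a button" idea Dan jokes about reduces, at its crudest, to an automated search over candidate models scored by cross-validation. This toy sketch, using standard scikit-learn on synthetic data, only hints at what real AutoML systems do; they also search preprocessing pipelines and hyperparameters, and it says nothing about CBA's actual tooling.

```python
# Toy "AutoML" loop: try several model families, keep the one with the
# best cross-validated score. Real systems search a far larger space.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score each candidate with 5-fold cross-validation and keep the best.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"selected model: {best} (mean accuracy {scores[best]:.3f})")
```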
Robert Todd: And just picking up on the previous question as well, you were explaining how your people are enthusiastic and wanna just get the best AI they can, but are they exposed along the way to the ethical considerations? Is there a sort of internal debate about that constantly, or how is that managed?
Dan Jermyn: It's always front of mind. I think culturally within the organisation now we're very aware of the need to be fair and transparent to our customers and the communities that we serve, and it's an absolute requirement of all staff throughout the entire organisation that this is core to their training and the way that they behave. Everything that we do starts with that "customer first" mindset, and the people in the data science team are absolutely no different to anybody else in the entirety of the organisation, from the people who serve on the frontline to the people who fix the computers; everybody has it ingrained into them that everything we do should be for the financial wellbeing of our customers and communities.
Robert Todd: I think we have time for at least one more question, if anyone …
Jack Braithwaite: Hi, Jack Braithwaite, Benevolent Society. Just on the ethics piece, I think it's interesting that when we talk about ethics and morality and AI there's often a presumption that humans are the guardians of morality and can always make decisions for AI and be the judges of that, although in reality we know humans are quite biased and quite unreliable, and also have their own histories and whatnot. What are your thoughts on AIs actually potentially making better moral decisions than people, through following a very strict set of rules, perhaps, that humans would otherwise be quite grey and fuzzy on?
Tim Brookes: Interesting.
[Laughter].
Tim Brookes: So, a couple of random thoughts. The first: a number of the issues that have emerged with AI have actually been human-generated problems, I think driven by the points you make - the machines pick up on human bias through what we do and what we say (Tay, the chatbot, being a classic example, but there are many others).
So in some ways we have to be careful not to make the machine in our own image, and I think that's the point you're driving at, and it makes a lot of sense. I don't know if you guys have any thoughts on that but …
Richard Kimber: Yeah, I think certainly the opportunity is there for a more consistent outcome. I think one of the benefits of a machine-trained, data-trained model is you can get something very repeatable and very consistent. Again, I think there do need to be a number of checks and balances around that, so I personally wouldn't wanna rely on an AI for morality ultimately, or ethics. It is such a big issue, and it does get back a bit to the self-driving car example, where I think we have to be careful that we don't hold it to such a high standard that it's way beyond what is feasible today. You know, society is full of so many views; we can't even agree to teach ethics in our schools, let alone have our computers know ethics. So there are so many bigger issues here, and we have to be a little bit careful that we don't hold AI to such a high standard that we'll never actually get to use it.
Dan Jermyn: I think it comes back to this question of bias and practical considerations. Any system that learns will only be as good as the data that we use, and the data itself can be biased, and the humans programming it can be biased, and we need to be mindful of that. I think what you're driving at can be achievable in certain closed systems. The problem more broadly is that most of the types of things we think about here are very complex and nuanced, and have implications that we hadn't properly thought of and therefore could not properly have, you know, put guardrails in for the machine. But if you think about a very narrow parameter set of things, then yes, I think it's possible for machines to systematically make good decisions that are beyond what a human would be capable of, whether or not a human would like to make them. But again, if you come back to the example of the customer engagement engine, there we have a very live example: our staff used to try to have conversations with customers based on intuition, or what they saw when the customer came to them, and perhaps across a large number of people the same customer might get a very different experience that way. Whereas with a very controlled system asking what is this customer going to need right now, how can I most help them, that, in a very closed sense, is easy for us to at least start that conversation through AI. I think the human then takes over and understands and interprets the nuance thereafter, but it's at least a starting point.
Tim Brookes: I think the other important issue is the impact of AI on human behaviour; that's the other part of that equation, so there's the input and then, as you were saying earlier, the output. They've done some studies, I think it was in the US, on just some very simple models - I wouldn't call it AI, because there were just simple games that humans were asked to play, around kind of free choice to benefit a group versus selfish behaviour to protect themselves. Interestingly, they put a couple of bots into the game, and if the bots were programmed to behave badly, things went bad very quickly. If they behaved well, the humans became very benevolent.
So you can see … I guess it's that old nudge theory of government: you could see that AI actually has incredible power to potentially drive human behaviour, not just in that one-on-one interaction that Liesl was talking about, but generally across society. I guess possibly one of the most speculated-about examples of that was, you know, the US election, and possibly Brexit, where the outcome might have been pushed in a particular direction by echoing information that spreads throughout our ecosystem, so people think they're part of a majority - this is what everyone thinks, so I should think that way or act that way as well. So I think that's gonna be an incredibly interesting and difficult issue to regulate, and it's gonna roll right through to how you regulate the provision of health care to vulnerable people, all of that. How do you stop humans from becoming dependent on AI, etc. etc.? It's a really interesting issue, and it's a problem caused by us, not by the machines.
Richard Kimber: That’s right.
Robert Todd: Yeah. On that statement I think we'll end there and I'd like to thank Dan, Richard and Tim for their contribution this morning. Thanks very much.
[Applause].