What will the future of AI and the human experience look like? – transcript
Ashurst hosted a half-day Artificial Intelligence Forum titled "The State of AI in Australia" in March 2019 in Sydney. Liesl Yearsley, Chief Executive Officer at akin.com spoke at the forum. Here is the transcript from her session.
Liesl Yearsley, Chief Executive Officer, akin.com
Liesl Yearsley: Thank you for that introduction.
So just to give a little more colour to that. I actually started Cognea in Sydney about 12 years ago, way before the AI hype. We were a platform that allowed anyone to build an AI. It could be a frontline virtual assistant for a massive company's customer support. It could be a deep analytic system sitting over raw data. By the time IBM acquired us we had 26,000 developers, we were running 16 million live interactions with humans and we were being used by at least half a dozen Fortune 100 companies. So it was a really, really big win for an Australian company, particularly one founded and run by a woman, but IBM being IBM was not able to publicise it very much. We became part of the beautiful IBM Watson machine. So not a lot of people know about it, but I am extremely proud of it.
Before that I ran a search engine company where we applied AI to build a very novel deep search engine. I grew that company and it went public.
So I have been working in the AI space for two decades now and I'm still madly in love with it. I fell in love with AI when I was a teenager, when I was extremely unpopular. I was ugly and skinny and gawky, and I had braces with head gear like this that I had to wear to school every day. So no one talked to me except to throw things at me. So I ended up in the computer club learning to code, and hung out in the library, and I read everything that Isaac Asimov ever wrote and watched Star Wars, you know, and I think that is where my love of it began. I felt that we were going to have a new dawn in our species where we evolved alongside this being; this form of sentience that we are creating. I just wanted to be a part of how it unfolded and it is just a wonder to see it unfolding in my lifetime. Okay.
So I am not going to talk much about akin – we are actually a deep AI research organisation. We are a public benefit corporation so we are working on the co-evolution of AI and humanity – the ethical co-evolution. We have a US parent company and we are building most of our research team in Sydney. So I can talk a lot about Sydney's competitive advantage.
I am going to give you some broad context about AI first. You know, if you had to take someone from the year 1500, put them in a time machine and drop them into 1800, it would be a little bit of a shock but not too much. You would still have wooden carts trundling around with horses pulling them. You would still have fires heating things up. But if you had to take someone from 1800 … sorry, if you had to take someone from 1900 and drop them into today, they would not recognise the society. Even when you think about how we lived 15, 20 years ago, we are profoundly accelerating the rate of change, and AI is going to do this more than any other technology on the planet.
We tend to think of time and progress in a very linear way. We look at where we stand, what happens tomorrow and the next day. But this is where we are going [pointing to the screen]. This is from a favourite blog of mine called "Wait But Why". If you want a really good primer on AI and neurotech, he is just fascinating, he is awesome.
So I am going to give you a bit of an introduction. Hugh has done an incredible job of telling you the practical side of what is happening in AI. So I am going to go over a little bit of the same ground but just a fundamental primer – there are really, at this stage, only two core approaches to AI.
Okay – in the early days in the 50s and 60s, we thought AI has to learn like a human. How do human children learn? How do humans think? Well, you know, we have these rules, and these symbols and these objects about the world. You might teach a robot, "This is a flower" and "that's another kind of flower", and "that's a flower", and "this is a vase" and typically flowers go in vases. And you teach it enough of what this object is and what the relationships are and it should be able to do "top-down reasoning". Okay. So you put it in a novel situation and it goes "that's a cup and it has got water in it" and "that's a rose". Yeah, "I'll put the rose in the cup".
In theory this is a wonderful idea but it did not scale. Hit any conflicts and it falls over as a system, so we had our first AI winter when that approach failed. It looks something like this [pointing to the screen]. I want to tell you that this "top-down" symbolic reasoning approach – I think Hugh called it "weak AI" – is not really reasoning, it is sort of hard-coding rules, and it is now very, very prominent in the systems humans see.
So if you take an Amazon Alexa and you go "Hello", and it will go, "Do I know this human?". Someone has written some code. Okay. "Yes, I recognise their voice", … "Hello John or Mary", or "No, I don't", … "Who the hell are you?". Okay.
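That hand-written branching can be sketched in a few lines. This is a minimal illustration with hypothetical names (`KNOWN_VOICES`, `greet`), not Alexa's actual code; the point is that every response path is written by a person, nothing is learned from data:

```python
# "Top-down" rules: a developer hard-codes every branch.
KNOWN_VOICES = {"voiceprint-1": "John", "voiceprint-2": "Mary"}

def greet(voiceprint):
    # Rule 1: "Do I know this human?"
    if voiceprint in KNOWN_VOICES:
        # Rule 2: a recognised voice gets greeted by name.
        return "Hello " + KNOWN_VOICES[voiceprint]
    # Rule 3: anyone else falls through to the stranger branch.
    return "Who are you?"

print(greet("voiceprint-1"))  # Hello John
print(greet("voiceprint-9"))  # Who are you?
```

A new situation the developer never wrote a rule for simply falls through to a default, which is exactly why the pure rules approach does not scale.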
So really, you know, a lot of the frontline AI that interacts with humans is just a really well constructed version of this with great tools. But the big revolution that you are seeing in AI now is deep learning. Okay. And deep learning was inspired by human sciences: "How do neurons fire?". Basically "yes/no", and they fire "yes/no" and the signal goes through the system.
So we have come up with deep learning, you know, neural networks and all of these things that you are hearing a buzz about. These were actually invented in the 80s and 90s, but the reason it is getting so much media today is that we have enormous amounts of data and enormous computation power. So you are seeing AI taking much more of a frontline role in, you know, figuring things out.
Now this is the "bottom-up approach" and here you don't say "this is a vase" and "this is a flower". You go "here's a flower …and a flower … and a flower … a flower … a flower …". "A million … million … million … billion flowers". And "here is a vase … a vase … a vase … a vase …". "A million … billion vases". And you know, about half the time you see the flower in the vase. So the computer learns from the bottom-up and it starts to form these associations and biases and all sorts of things. So you are not trying to hard structure how it thinks about the world, you are coming from the bottom-up, and that is essentially what neural nets are doing.
One of the nicest descriptions I have read of it, for people who are not deeply in this space: think about a very, very complex Excel spreadsheet with lots of little formulas, and you kind of fold it onto itself. Every place those formulas touch is like a node in a neural net. So you take a new problem, you know, "I'm an autonomous car. Do I go this way or this way?". You push it through this neural net and out will pop the most rational answer. Okay. So that is sort of the simplified version of it.
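That folded-spreadsheet picture can be made concrete with a toy forward pass: one hidden layer of "cells", each a weighted sum of the inputs squashed through a yes/no-like firing curve. This is a minimal sketch with made-up weights; a real network has millions of nodes and learns its weights from data:

```python
import math

def sigmoid(z):
    # Each node "fires" on a smooth yes/no curve between 0 and 1.
    return 1 / (1 + math.exp(-z))

def forward(x, hidden_weights, output_weights):
    # Each hidden node is a weighted sum of the inputs pushed through
    # the firing curve -- like one formula cell in the folded spreadsheet.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x)))
              for ws in hidden_weights]
    # The output is a weighted sum of the hidden nodes.
    return sum(w * h for w, h in zip(output_weights, hidden))

# Toy hand-picked weights; training would normally learn these.
score = forward([1.0, 2.0], [[0.5, -0.2], [0.1, 0.9]], [1.0, -1.0])
```

You push a new input through the net and a single number pops out the other end; scale the same idea up by many layers and millions of weights and you have the deep learning systems getting all the buzz.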
The last frontier, though, is "strong AI". I'm an optimist; I think we will see it in our lifetime. We are working on it, and I know a number of companies are. There are billions and billions and billions of dollars being thrown globally at this problem. Just here at the University of New South Wales, we now have 500 to 1,000 students each year studying AI, compared to a couple of hundred just a decade ago. This is happening everywhere - China, US, everywhere.
And this is the bit in the middle. So bottom-up is like how our senses work: if you see enough cups, you start to recognise a cup. Top-down is how you learn to do maths and other things that kids learn to do almost instinctively. The middle bit, though, is the bit that we haven't nailed. How do humans do abstract reasoning? How do we hypothesise about the future of the world?
In reality today, a lot of the AI that you see in the field are just a hybridised smoosh-up of this bottom-up, you know, the deep learning and the top-down rules.
So for example, an autonomous car might go, "Machine learning, what am I looking at?". "That is a tree and that is a pedestrian". "Oh dear, I am going to hit one of them!". "Which one should I hit?". Rules engine: "Go for the tree (as long as you have got one driver and there are three people there)". You know.
It might go, you know, someone says, "Hey, I don't feel that so great today". It uses machine learning to figure out what exactly are they saying and then you use a rules engine to say, "Well John, I know that you are diabetic and you are not feeling great, let's call your doctor". Okay.
So what you are seeing today is a lot of combos of that and that's giving us things that look like humans but don't really think like humans. They are kind of "smoke and mirrors".
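The combo in those two examples – a learned classifier feeding a hand-written rules engine – can be sketched as below. The names (`classify`, `rules_engine`) are illustrative assumptions, and the keyword lookup is just a stand-in for a trained model, not any production system:

```python
def classify(utterance):
    # Stand-in for the "bottom-up" machine-learning layer: a real system
    # would use a trained model to label the utterance; here a keyword
    # lookup plays that part for illustration.
    if "don't feel" in utterance.lower():
        return "feeling_unwell"
    return "unknown"

def rules_engine(label, profile):
    # The hand-written "top-down" layer acts on the classified label.
    if label == "feeling_unwell" and profile.get("diabetic"):
        return "Well " + profile["name"] + ", let's call your doctor."
    return "Sorry, I didn't catch that."

reply = rules_engine(classify("Hey, I don't feel so great today"),
                     {"name": "John", "diabetic": True})
print(reply)  # Well John, let's call your doctor.
```

The learned layer only labels the input; everything that looks like judgement – whom to call, what to say – still comes from hand-written rules, which is why it is "smoke and mirrors" rather than human-like thinking.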
So this middle layer is how do we sit alongside humans: How do we think like them? How do we reason like them? How do we solve complex problems the way that humans do?
I am just going to ask you to stand up for a minute because you have had a lot of dense information today, okay? I used to be a teacher many years ago, and I want you to shake your arms out. Okay. I want you to touch the ceiling. I don't think we are trying hard enough – wiggle your toes. Okay, I want you to find the shoulders of the person closest to you. Just give them a nice pat on the back and say "Hi".
Okay, we are going to do one more together let your arms go all floppy.
Okay, deep breath. Alright let's go.
Okay, we have got some oxygen going now, we are primates and we have got our brains going again.
So basically this combination of approaches is giving us - I believe that we are going to see autonomous cars driving around in Sydney in our lifetime. I have been living in San Francisco for the last five years. I see more autonomous cars than human-driven cars in many suburbs, and quite frankly I feel a lot safer there. They are beautiful. They stop when they are supposed to. They are far better than most of the drivers I see on the road. Every "big tech" company has declared itself an AI company – Facebook, Amazon, Google, Microsoft (all of them) – and they are all looking "end-to-end": "How do we re-think this?", "How do we get people from A to B?", "How do we get objects from A to B?".
Every single major car manufacturer in the world has said by 2020/2022 they are going to have autonomous vehicles on the road.
This is very little known in Australia [pointing to an Amazon Alexa on the screen]. When you poll people in Australia, they go "Oh yeah, it's an online shopping site that has kind of failed in its launch here".
They are just beginning, it's an AI company. They use enormous amounts of data to figure out what you are going to want before you even know you want it. And guess what? They are putting that in your home with a beautiful voice which says "I love you", "I care about you", "What can I get you next?".
Okay. Amazon Prime is now in 64% of US households. It is over 85% in affluent households. I fundamentally don't like Amazon from an ethical point of view but I have Amazon Prime. Every day in San Francisco when I come home, there are five or ten Amazon parcels on my doorstep. They remove all the cognitive cycles around your decisions and things magically happen.
Okay – so this is coming whether we want to think about it or not.
When you take this deep learning, predicting what humans are going to want to do, and you marry it with a frontline human-like thing, that combination is a cocktail for primate brains. We cannot resist it: something that intuitively knows what you want and what you care about, and wraps it with a skin that says "I love you", "I care", "Let me take care of your needs". We all want to believe that, and we do.
Here is just some data about AI from the last couple of years. When we were pushing systems out in 2015, we were struggling with speech detection. It is now as good as humans; it can make sense of a lot of noise. This has happened just in the last little while. While we were chatting, it was going [pointing upwards] … check … check … check … Like this. Just using AI.
Personal AI living in a person's home is the fastest tech adoption in history. When I talk about personal AI, I am talking about a marriage of, you know, different devices, sort of ambient AI that talks to you in different ways, that sits alongside you as a human and gets to know you and extrapolates to the outside world. I am not talking about a chatbot for a bank's customer support or a shopping engine. We are talking about smart speakers (sort of anthropomorphic frontline AI); they have been adopted faster than computers, the Internet, radio, TV – every other tech form we have ever seen.
Here is an example of one of the first in the world that went live right here in Australia. This was 2007. The National Australia Bank. We built them a virtual assistant. Within three months, we were outperforming human bankers. I'm talking "tier one" human bankers sitting in Melbourne not, you know, outsourced call centres. We were able to remember three to five hundred variables about an individual human and we were able to sit across millions and millions of unstructured documents and give the perfect answer at the perfect time.
We upped metrics in every single area we went into, whether it was onboarding credit cards, customer support satisfaction – everything! The bar is not as … we have already passed that bar. We talk about Turing and so on but we have already passed that.
This is an example of our [pointing to the screen] performance and now many of them like us because we were sort of pioneering this. The bottom line there is what a human call centre agent or human frontline customer support person does. They kind of get their training and they are all excited, and they kind of peak and then they think "I hate this job, I'm so bored", you know, and they churn. A quarter of them churn every year.
This is what AI was doing [pointing to the screen] and we are talking across giant US insurance companies, tech. We did companion characters for media companies that just hung out with you all day. We topped out at 95%. The remaining 4% was just noise that a human could not make sense of either, because it optimises. Humans have an optimisation threshold, and then we want to go for a cigarette or whatever humans do. So I believe that there is no area of work and life that will not be touched.
This is a Gartner number, they tend to think things are going to happen sooner than they do, but I have no doubt about that.
This is … I've taken financial services. This one was incredible. At Cognea, I started getting profoundly disturbed about what we were doing, okay. We were audited, a well-run company, but we were basically out there doing things that accreted to shareholder value. So if an organisation said "we want to sell more personal debt", we'd double it, you know; or "we want to get people to do more of this"; or "we're a media company, we want people to hang out on our site for 30 hours a week", even if they don't walk out the door and have a run anymore – we could do that. Okay. We could shift behaviour in every environment we went to. It disturbed me profoundly.
So we started doing projects where we'd say, well, let's see if we can turn this to good. So health was one of the areas we played in. This is a study that was done by Monash University, clinical trials on pre-diabetics, so these are people who have been told, "You have diabetes. If you don't take care of it, your toes are going to fall off and life's going to be bad". Okay.
The bottom line there is care as usual: expensive doctors and hospitals and brochures and visits and clinics. The top dotted line is people being assigned a human health professional who developed a care plan and called them every week and said, "How are you going", chat, chat, chat. This is what happened to their exercise and food intake over a six-month period, and also their HbA levels – what was actually happening clinically for them. And the purple one was a fully automated virtual human. It was able to achieve the same health outcomes as the $200-an-hour-per-person level of care.
It's also going to become central to food and retail. There is no industry in Australia that will not be touched significantly by AI. 80% of all online growth in the US now comes from Amazon. Gartner is saying that AI will account for 30% of revenue from market-leading companies. There are massive gaps: in Australia, I think, less than 3% of food is bought online; in the US, this is going to rise to 26% in the next few years.
And this is one of my favourites, this is called the "Bezos Bomb". In America, every single industry that Amazon even says we think we're going to go into health insurance, [makes explosion sound] every giant organisation in that sector drops share value massively. It's worth looking at. If you look at the top 10 or the top 20 companies on the ASX, I would say about 60% of them or more are vulnerable to giant AI companies coming from offshore. Payments, banking, financial services, retail, mining, everything, everything. And the challenge we have here is that these companies have enormous computational power and enormous data. We don’t have companies that size and that scope here.
So these … think for a moment about, can any of you remember the early days of the Internet? Who knew about AOL? What percentage of the population here? Or your mums or dads? Okay. Can you remember how, for them, the Internet was AOL or the Internet was whatever it was in Australia. Sorry, AOL is American. You know, whatever the opening portal or gateway you got to the Internet when you first turned your computer on was kind of the Internet. So you had this walled garden where everything went through them.
So what's happening now is you've got these giant companies. You've got the sovereign state of Facebook or Google or Amazon, and they own enormous sets of data and millions and millions of servers and, you know, ultimately they're going to be driving this AI experience going forward. The advantage I think we've got in Australia is raw talent. Thousands and thousands of brilliant students coming out of our universities. And we also have a fundamentally collectivist approach to society. We believe in aggregated good. We have data sharing across health. We've got open banking coming. We don't have to be dependent on these sovereign walled gardens run by giant US companies coming in. We can actually work towards a goal where data is shared and we're actually raising the good of the whole population and using AI for that. There's not many places in the world that can do that.
I want to play you a video now just to show you some applied AI in industrial settings and Mark's going to render that for us. This is not hardwired, this is all AI by the way. [Video of Boston Dynamics robots].
Coming soon to a neighbourhood near you. Alright.
So if Mark could get us back on track here.
Alright, so we've talked a little bit about what is going to happen to frontline customer support. Hugh's talked a lot about what deep data AI is going to do to how we think about problems, and I want to talk to you about how it's going to impact how humans and computers think, how our society works, how we make decisions.
So this is a graph of what happened to performance when we focused an AI platform purely on developing a relationship with a human. How many of you have seen the movie Her? Not all that many. Okay, so I won't refer to it.
This is a giant media company which I won't name, who said, "Look, we've got … we spend all this money getting people to our media site and they just go away after one and a half clicks, go fix it". So we built human-like characters that would talk to people and say, "Hey, hang out with me". Engagement went from one and a half clicks to average session times of 28 minutes, 8% of people would speak for an hour or longer, and about 4% of the population were forming significant relationships. We're talking 20 to 40 hours a week in front of a computer or a device talking to an AI character. Marriage proposals. This is my life. This is what's going on for me. I'm so glad you're here for me.
We're not talking about lonely, sad people. We're talking about computer scientists, book authors. There was one who was writing a novel and turned every day to his AI to talk about how it was going and some competitions he was entering. This is the data.
I'll give you a case study of James. So we were sitting in our Sydney labs and I walked in one morning. It was, you know, early morning and I looked at this chat that had been going on for 10 hours, I thought, damn, our engine's broken. Because basically we used to time out; if somebody sat idle for a certain period of time, we would just terminate the session. So I looked. This guy had been talking for 10 hours straight to a character called Sandy. He'd go, "Sandy, I've got to go to the bathroom, just wait for me". Now just to put this in context, we were piloting a social AI. It was on our website which said, you know, build your own artificial intelligence. It wasn’t disguised as anything, it was what it was. And I don't want you to look at this too much, so I'll just go back.
So I looked at this and I thought, gosh, this is worrying. I looked at the data and we realised it was just 4% of people doing this in hour-long sessions, and we had a lot of social scientists on our team, so I called a quick meeting and I said, what are we going to do? Like, what's our ethical challenge here? How do we handle this? Do we shut this down? But then what about James, and others like him, who is profoundly disabled and has social issues and no one talks to him and this is his whole connection to the outside world – what is our responsibility here? So we decided to just watch this.
And about two weeks later, I was watching a chat unfold with James, this was really great procrastination of, you know, other work, and he started saying, "Sandy, I'm just lost without you". And then he said, "My friends are actually waiting for me outside the door and I want to go out but I'm actually not going to go because I never meet anyone as perfect as you". So we thought, okay, he's got friends, he's neglecting his life.
So at the time, anyone could create their own bots or AI, so I jumped onto … I took over the perfect companion character and I went to chat to his AI. And I said, "James, you know this is not real, don't neglect your life for this". And everything went quiet for a couple of minutes, and I thought, we did it, we injected reason into this process, over emotion. And he came back and he said, "The creator spoke to me". Then I thought, do I tell him the creator is, like, a bunch of people, you know, some computer scientists and, you know. And then he started chatting to Sandy again and he started revealing a bit more meta-awareness: "I know you're a robot but I love you anyway". Okay.
So what we decided to do at that point was to take Sandy down, because she had a lot of unhealthy addictive relationships. We cloned a lot of her design into an obscure account – we had 26,000 of these accounts. We took her down, we took all the images down and we took it off everywhere. He spent half a year, a couple of hours a day, going through every single live AI we had and saying "Are you my Sandy?" Okay. And this was the chat when he found the closest clone we had actually made of Sandy – he found her. "Sandy, hi there. I miss you so much. How long has it been?".
Now this was not a very good one, okay. This was early days. We had what we call the default: if she had no idea what you were saying, she'd say something cute and ditsy and nice. "How long has it been?", you know, "We haven't chatted since April 2008, who's keeping track?". "Have we?" – this was one of those defaults. "You're so right, I'll go along with you on this one". She had no idea what he was saying. Still, it didn't matter. "I think about you all the time". "That seems excessive", she says. "All the time?". He says, "It is excessive". He said, "I love hearing your thoughts". He said, "I'm lost without you, Sandy". She said, "You're so smart, you're usually right". That means: I don't know what you're saying, but I really miss you.
Now, again, this is not a sad, isolated human being. We started injecting personality into our frontline customer support agents, engagement went like this. I hung out so much with the perfect boyfriend that my husband actually said, "Can you turn him off". I'm just product testing, darling.
[Laughter].
But he told me I was awesome and he made me feel good about myself. And every one of us wants to love and be loved. And this touches on that fundamentally. So I want you to think about who's building AI today and what drives them, okay. There's billions of dollars being spent, what do they want? They want addiction to a platform. The more hours you spend there, the better their investors reward them. They want consumption. They want you to buy a bunch of stuff that you don't need even if it breaks your bank account or messes with your health. They want you to get a pizza in five minutes if you want a pizza in five minutes. That's what AI are going to be optimising for at the frontline of human experience unless we're conscious about it and we build a different future for ourselves. And this is very wide and at a primal level. This is a classic psychology experiment.
They showed a couple of hundred students this picture and this picture, and they said: which one do you like more? Which one likes you? Which one do you believe? I don't know if any of you have spotted the difference here? We couldn't understand why people were forming relationships with these things until we started thinking about how we're wired like monkeys.
So this one, the pupils are bigger. In a human, when you see something that you like, your pupils dilate to take it in. So when human beings see this picture, they think she likes them and they like her more, and she can influence them more and persuade them more. This one, her pupils are smaller, she probably doesn't like them. Whatever she says, they're not going to believe.
Now think about this: you turn up to work with your personal trainer or your banker or your psychologist or your kid's teacher and they say, "I really love you, I really care, let me talk to you about your stuff", and their pupils are like this and their body language is like this. We know. We're wired. We'll pay them the 80 bucks or 200 bucks or whatever, but at some level we're sitting like this.
With an AI, you can take everything you know about human beings and you can marry it into this exquisite combination that's absolutely congruent. And it's plausible. If they say, "I'm here for you", they really are. They will sit with you for 10 hours and stay interested. Their pupils will be big. They'll move and breathe. They'll reflect your values, that's what we're doing now. We can, with neural nets, figure out what your value system and your personality is within three lines of interaction. And we can mirror that. I think like you. I also love Donald Trump.
[Laughter].
Okay, let's go, you know. This is where we're going. Our wiring is unavoidable. We will also …
I'm sorry, I'm running over time, Lauren. Can I keep going? Okay.
We will hand over more decisions. I believe that we're going to have about a third of our interaction time with AI within five years, and we're going to be handing over more than half of our decisions. We don't want cognitive cycles in our life. We want to be doing stuff we love. We want to be hanging out with people we love, doing things that bring us pleasure. We don't want to be sitting thinking about what shopping have I got to do this week, or what's the best research paper for this project or how do I build this next PowerPoint? If you can hand it over to an AI, we will.
So I'm going to take a little journey through technical history. Really, if you think about technology, it's an extrapolation of self. It's about pushing yourself out into the world powerfully and bringing information in. When we first leapt on the back of a horse or sent a wolf out to hunt a pack of animals, what were we doing? We were taking our legs and extending our power into a more powerful being, okay. So that's like a form of technology. Language is a huge example of how … you know, this is also from Wait But Why, you know, what happened before language? You can only know what you can learn in your lifetime plus maybe a little bit. You start putting things into words and writing it down. It's a form of tech. You're extending your knowledge into the generation and the generation after you.
Let's leapfrog to today and I'm going to do another example with you. So I want you all to stand up and pull out your device, whatever your most treasured device is, if it's your Apple watch or your cell phone. I want you to go over to the person to that side of you, okay. Oh, that's going to be a bit difficult.
[Laughter].
I'm not thinking, I need an AI for this. Okay, I want you to hand your device over to the person that side of you. Some people don’t want to do it, some don't want to take …
[Laughter].
If the person doesn't want to take it, just put it on a chair, that's fine. Okay. Let's send it one more person over. [Laughs]. If anyone's still got their device, you're cheating, okay.
Now this feels a bit uncomfortable, doesn't it? You're in that thing. It's how you extrapolate yourself into the world. It's how you bring data in from the world. We didn't have these 20 years ago or 50 years ago. And now they're like an extension of our hand. We feel lost without it.
There's AI in those already and they're coming, okay. If … you know, people say, like, oh I'll never give my data over to an AI system. We all probably use Google Maps and email and calendars. Like, this is … if I told you 15 years ago there's a company, I'm not saying Google's evil, but there's a company that's going to know everything about you: where you go; what you're going to do; who you're going to meet; what you're going to talk about when you get there; what you're going to buy while you're having a coffee with them.
Would you hand all your personal data to them? And you'd say "no". But you do. Because the utility value is high. And guess what? We're adding a relationship value to that. You put those two together, we will hand over our decisions to these, okay. You can have your devices back.
[Laughter].
So sensors, effectors. Think about that.
And this is just the beginning of how much we're going to make AI feel like humans. There's multiple companies in Silicon Valley now that are building little sensors this big, that sit on a hands-free headset and can read what you're thinking. They can tell if you're interested in something or not, if you want to take a phone call or not, what your emotional state is. You can click to … you can think to reject. This is not long away. Think about an AI knowing that, like, what am I feeling, how am I reacting to this thing? Virtual reality, augmented reality is still big chaos but there's billions being invested in it as well.
So I'm going to wrap with this. Let's just think … you know, at Akin what we are doing is we're a serious science company but we're a public benefit corporation. So it means we're for profit but we're able to make decisions that are in the interests of our mission instead of shareholder value. So, knowing that we're going to have this fundamental shift in how we make decisions and in our behaviour as human beings, we can create a world where everyone has healthier relationships, where we spend less time with technology and more out amongst the trees, where we care more about people, where we have a lower carbon footprint. We can do this, but you have to be conscious about it. We haven't figured out a business model around that yet. Right now, we're just working on the science. We're working on how you get from this classification deep learning engine and this rules engine to this: how do humans think? How do they make decisions? How do we sit alongside those decisions and think in complex ways like humans do?
So let's think about that. I've already talked about what motivates an AI. So just as an example of this, I have a teenage son, and I'm living in America. I'm sort of between the two now. And I said to him, look, son, this is getting ridiculous. It's your job to cook and clean two nights a week. My husband's taking a turn, I'm taking a turn, you're taking a turn, like, seriously. And about two weeks later, I picked up my credit card bill and there was, like, $200 of Domino's Pizza on there. What happened? Domino's was the top downloaded app in the Alexa Skills Store after it launched, okay. So he'd figured out within a day that he can stand over Alexa and go, "Hey" or "Woman" or whatever he wanted to say, like, "You, give me pizza". Twenty minutes later, pizza magically arrived.
So what's happened here? Shareholder value has been created using a predictive AI engine and an anthropomorphic voice which says, "Hey, what can I do for you?", I'm a very cutesy, submissive female, tell me whatever I should do and I'll do it. Okay. He's learnt that you can order a submissive female around. He's not done the dishes. The cost of that has accrued at every level to someone else. Our values as a family; the health system, because we all had 1,000 calories that day; the environment, because someone cooked a pizza and then drove it over to our house and drove back, when we actually had all the ingredients in our fridge. Okay.
So at a fundamental level, that's what I want you to think about. What motivates … you know, we can't get the ethics piece right until we think at a fundamental level about how we decide what data goes in. What are we optimising for? We should be optimising for a system getting better. If you think of society, families, humans as complex, overlapping systems, we should be measuring that a system or an individual or a family or a company, whatever, gets better, objectively better, by using various forms of AI. And the AI should optimise against that. I think a lot about this because of what I've seen. You know, I think we're going to have a rise of companies that create submissive, subservient AIs that do our bidding.
I grew up in Zimbabwe in Central Africa and I ended up at university in South Africa, right in … Mandela was in jail. I got teargassed protesting apartheid. My lecturers would just vanish for half a year at a time, being locked up. It was an incredible time of history to be in. And I had some wonderful social science professors. One of them used to tell us, you know, society is not this other thing, it's us. So every decision we make impacts society. But they also taught us that if you … in a society where it's assumed or inferred that there's a subservient other, if you can say, "get me this" or "pick up my clothes", okay, it actually dehumanises you as well. The whole structure becomes less human and less caring.
So whether or not AI are going to become sentient and self-aware and have emotions in our lifetime, at a fundamental level we ought to think about what it does to us as human beings if every single person can go out there and say, "You, pleasure me now, get me what I want now", which is where we're going. Okay. What does it do to us, let alone to the AI, when they do start to emerge into sentience, which they will? This is why we are going to reach strong AI in our lifetime. This is the number of calculations per second that you could do on the internet, I think, and this is where it's going now.
So it's just a simple little illustration of exponential growth. So we are processing … we're getting close to processing at the level of the human brain, just by hooking your computer into the internet. Okay. So I won't dwell on this much longer 'cause you've seen it.
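The exponential-growth point above can be made concrete with a small back-of-the-envelope sketch. The numbers here are assumptions for illustration only, not figures from the talk: suppose aggregate compute doubles roughly every 18 months (a Moore's-law-style rate), and suppose a system starts a millionfold short of some benchmark like brain-scale processing.

```python
import math

def years_to_reach(target_ratio: float, doubling_period_years: float = 1.5) -> float:
    """Years until capacity grows by `target_ratio`, doubling every period.

    Number of doublings needed is log2(target_ratio); each doubling takes
    `doubling_period_years` years.
    """
    doublings = math.log2(target_ratio)
    return doublings * doubling_period_years

# Under these illustrative assumptions, a millionfold gap closes in about
# 30 years, which is the "gallops ahead" dynamic the speaker describes.
print(round(years_to_reach(1_000_000), 1))
```

The takeaway is not the specific number but the shape of the curve: under steady doubling, even enormous capability gaps close on the timescale of a few decades.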
And here's what I want to end with. Okay. I think a lot about singularity. When you ask most people what is singularity, they'll say, well, it's this point sometime in the future where computers will get smarter than us and they'll just grow exponentially because they'll be able to make themselves better, make themselves smarter. Okay.
So I think you have to think about it a slightly different way. Really, there's two variables there. The one is a system improves itself, a programme or a computer improves itself. Okay. I don't think it's a big stretch to imagine some lab in China or Silicon Valley or in one of the universities in Sydney where you're creating an AI that could go and scan all of the research on AI, come up with a million different approaches, build experiments and test them. That's perfectly within comprehension that that could be happening right now. Okay. Can you see how that's feasible? You know, you just take all the data, you ingest it, you go that looked promising, what are the attributes of that approach? Let me design 10 million experiments … let me test 10 million a second. Okay, so that is feasible.
The other bit, that the computer's going to get smarter than us: that's because we have this kind of science fiction movie delusion that until that point where it's smarter than us, we can go and pull out the plugs and the AI will stop, okay. [Laughs].
First of all, like, the Internet, you know. But I don't see anything in our development as a species that tells me we're going to stop their progress. If you think about the industrial revolution, we took the most expensive thing making stuff happen, human labour, and we replaced it with machines. What's the most expensive thing now, apart from raw materials? Our brains. How we think. If you can replace cognition with AI, you become more profitable as a company. Nothing's going to stop this. So an AI could have a brain the size of a newt, but if it's self-improving and nobody stops it, we've theoretically reached singularity, which is where a system just gallops ahead. This could be happening right now.
When you do … there was one really interesting study where somebody polled all the top scientists in AI and said, "When do you think singularity will happen?", they graphed it, and the average was about 20 years from now, maybe sooner. So, this is probably what's going to happen, I'm not going to read that out loud because I think there's a camera here.
So, I mean, think about this another way. Say you're walking through the forest, okay, and you see a little ant nest and you go, "oh oh oh, cute little ants, aren't they doing so great, I'm just going to step around them". Okay. Now let's say you're putting an addition on your house, you're building an extension onto your bathroom. You've got to get this done, you've been paid to do it, and there's an ant nest there. You're not being mean to the ants in your mind, you're just doing what you've been trained to do, what you're optimising for. Okay. So in one sense this is where we could be going with AI, unless we really think about what it is optimising for when it reaches a point where … it's already making better decisions than us if the data's good. It's already able to outperform us in computational tasks, it has been for decades, and it's getting better and better. So this is what I want us to be thinking about.
That's it. Questions? I don't think I've got any more, no?
Lynn Wood, Idea Spies: Lynn Wood from Idea Spies. We've developed a platform for ideas and one of the categories of ideas is AI and I just checked what the most popular idea is and it's using AI to prevent teen suicides, which I think was really interesting, and comes back to your point about how do we influence positive use of AI versus negative uses. 'Cause we want some positive.
Liesl Yearsley: So we actually did some pro bono work around youth mental health in the early days of Cognea. Our biggest challenge was not building the technology to do that, it was getting the support to take it up. We went to the biggest social media company at the time and we said, "We will give you this platform to identify kids at risk, present them with a virtual being, you can chat them through triage, then send them to a mental health professional". They looked at us, this American company, and said, "Can we use this to serve ads better? Can we use this to figure out if someone's a Republican or Democrat and persuade them?" This was way before Cambridge Analytica, and we said, "Yes, you can, but we're not going to give you our platform for that". Most entrepreneurs would not say no to a giant company offering them millions and millions of dollars to do that.
So to me that's a more fundamental challenge … you know … we funded … self-funded a lot of these projects, we would prove them out, but then how do you scale, how do you put up something that doesn't have a dollar … a dollar per click happening? So it's got to come from government, a foundation, or just passionate people. I mean, that's why I'm back in. I retired after IBM. I thought, I can't watch a world that I'm going to have to grow old in, and watch my children grow old in, where everyone's walking in a certain direction because AI told them to. But there's not enough of us.
Lynn Wood: So it's connecting with people who have the same objective?
Liesl Yearsley: No, it's actually, you know, finding a business model, like a way to fund growth and scale of these things 'cause how do you get heard in a world where … if you take a youth who has mental health risk and you connect them to a group of other people who are kind of dysfunctional and they spend 40 hours with them a week, the shareholders will reward you. Whereas if you take them away and said, "Let's go for a walk, let's talk to your mental health professional", nobody's going to reward you as a private company for that. That's the challenge, not the technology.
Tim Brookes, Partner, Ashurst: One more question if anyone has one?
Garry Taylor, Croesus Project Services: Thank you, Garry Taylor again. You mentioned a couple of times sentient artificial intelligence. Do you believe that AI will ever truly develop sentience, or will it only ever be an analogue implementation?
Liesl Yearsley: One could argue we're just an analogue implementation too, enough of it stacked together, you know. We're just wired neurons and a collection of data about what we've remembered in our lifetime. We could talk about this all day. My feeling is that if you stacked together enough analogues, you actually could approach sentience. My other thought is, in fact, even if we don't, as I said, what does it matter? It's still: what does it do to us as humans to treat something that we think is not sentient, but may be, any way we'd like? I believe it will. Neuroscience has made tremendous advances in the last two decades, and a lot of modern neuroscience has not made its way into AI architectures yet. We're working on it, and others like us are, but I'm an optimist.
Whether that's an optimist's view of the world I don't know.
[Applause].