When it all goes wrong: the ethics of artificial intelligence decision making – Q&A with Tae Royle
Artificial intelligence is a powerful tool for making quick and accurate decisions. However, automated decision making which directly affects the rights and obligations of individuals can go badly wrong. In this article, Tae Royle, Head of Digital Products APAC, explores how lawyers can navigate ethical challenges in artificial intelligence powered decision making.
Artificial intelligence gets a lot of coverage in the press. But what does the term really mean?
There is a lot of clever software out there, but it is not artificial intelligence unless it is capable of learning. Artificial intelligence means that the software can progressively learn and improve over time by training.
At its heart, artificial intelligence is just a statistical model. The system takes a set of training data and fits a statistical model to that data. You can then ask the artificial intelligence a question, and it will consult its model and give you a predicted answer. The quality of the answer depends on the quality of the model, which in turn depends on the accuracy and representativeness of the training data. The system can do a great job of answering questions that match the training data and model, but will tend to be caught out by questions which fall outside the scope of the training data.
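To make that concrete, here is a minimal sketch in Python. The data, feature and outcome are invented purely for illustration; the point is that the fitted model answers any question you put to it, but only questions within the range of its training data are well supported.

```python
import numpy as np

# Illustrative only: invented training data where the feature (age) ranges from 20 to 40.
rng = np.random.default_rng(0)
ages = rng.uniform(20, 40, 200)                      # feature values seen during training
spend = 50 + 2.0 * ages + rng.normal(0, 5, 200)      # outcome with some noise

# "Training" here is simply fitting an ordinary least-squares line to the data.
slope, intercept = np.polyfit(ages, spend, deg=1)

def predict(age):
    """Consult the fitted model and return a predicted answer."""
    return slope * age + intercept

print(predict(30))   # inside the training range: the prediction is well supported
print(predict(75))   # far outside the training range: the model still answers,
                     # but the number is an extrapolation the data cannot justify
```

The model never refuses to answer; it simply returns whatever its fitted line implies, which is why questions outside the scope of the training data are where it tends to be caught out.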
Where are some areas of concern in relation to artificial intelligence?
Artificial intelligence powered decision making is a rapidly growing area. If you have a business that needs to make thousands of routine decisions, such as forecasting the amount of stock required for a retail store, assessing whether a fleet vehicle requires preventative maintenance or deciding the amount of credit to extend to an individual, then you have a legitimate interest in trying to make the best possible decisions. You also want to reduce the cost and time of making those decisions.
Artificial intelligence is incredibly powerful because it can make a large number of decisions very quickly by reference to its statistical model. It can also adapt its model over time to changes in market conditions by incorporating additional training data.
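As a hedged sketch of that incremental updating, the code below uses scikit-learn's SGDRegressor, whose partial_fit method updates an existing model with new observations rather than rebuilding it from scratch. The stock-forecasting framing and all data are invented for this example.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# Illustrative stock-demand forecaster; the three features might be
# day-of-week, promotion flag and recent sales (all invented here).
model = SGDRegressor(random_state=0)

# Initial training on historical data.
X_hist = np.random.rand(500, 3)
y_hist = X_hist @ np.array([10.0, 5.0, -2.0]) + 100
model.partial_fit(X_hist, y_hist)

# Later, as market conditions shift, new observations arrive and the same
# model is updated in place with the additional training data.
X_new = np.random.rand(50, 3)
y_new = X_new @ np.array([12.0, 4.0, -1.0]) + 110   # slightly changed relationship
model.partial_fit(X_new, y_new)

print(model.predict(X_new[:5]))   # forecasts now reflect the updated model
```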
However, use of artificial intelligence as a decision maker can also go badly wrong. In particular, things tend to go wrong when the people using the system do not understand its limitations: for example, they fail to understand that the system's 'decision' is simply a prediction based on a statistical model, or fail to take into account the limited scope of the underlying training data.
Can you provide some real world examples of the sorts of challenges raised?
An officer of the Victoria Police was recently quoted in the press regarding an artificial intelligence tool used to track a group of 40 repeat youth offenders:
"We can run that tool now and it will tell us – like the kid might be 15 – it tells how many crimes he is going to commit before he is 21 based on that, and it is a 95% accuracy,” one senior officer told Weber. “It has been tested.”
This statement is concerning because the officer appears to be treating a statistical data modelling tool with an unwarranted degree of deference. The model may be able to predict which youths are at risk of reoffending, but predictions become increasingly problematic the more granular they are, and the farther into the future they are made. Self-fulfilling prophecies are another risk: a troubled youth told that there is a 95% chance they will be a prolific future offender may live up to that expectation.
In Connecticut Fair Housing Center, et al. v. CoreLogic Rental Property Solutions, LLC, March 25, 2019, a single mother in rental accommodation was refused permission for her disabled son to move in with her. Her son had been injured in an accident which left him unable to walk, talk or care for himself. In a 'computer says no' scenario, CoreLogic refused to provide reasons for the refusal. It was later revealed that the reason was a shoplifting charge against the son which had been dropped, and which had occurred some years prior to the accident.
The relevant District Court found that CoreLogic had discriminated against the son, and had prevented the landlord from redressing the situation by withholding key information.
There are a range of other relevant examples, but they often come down to a handful of key issues:
- a failure to allow people adversely affected by a decision to understand the basis on which the decision was made and to appeal the outcome;
- a failure to properly supervise outcomes of the artificial intelligence tool; and
- a failure to properly train the people charged with supervision of the tool to understand the limitations of statistical predictions, statistical models and training data.
What ethical obligations do lawyers have in relation to these challenges?
In most jurisdictions, lawyers have positive duties to be competent, to effectively supervise and not to discriminate.
A duty of competence would require lawyers who are advising in relation to artificial intelligence tools to understand how those tools work. While they are not technical experts, they should understand that responses from artificial intelligence are predictions based on statistical models. They should understand, at least at a high level, where the training data came from and its limitations.
A duty of supervision may include an obligation to supervise artificial intelligence used in the lawyer's practice. They need to understand the technology well enough to be able to comply with their other ethical duties such as ensuring that work product provided to clients is accurate and complete. They should also understand when it would be inappropriate to use the tool.
Lawyers have a duty not to engage in discrimination. Artificial intelligence tools may act in a biased manner if their training data contains biases, if the person training the tool has biases, or if the tool receives external feedback which is biased. Accordingly, a lawyer's duty not to discriminate may require the lawyer to understand how responses are generated, and to take positive steps to ensure that any biases in training data or processes are addressed.
What practical steps can we take to mitigate these risks?
As legal professionals, we can take positive steps to address ethical risks associated with artificial intelligence powered decision making. These steps can be grouped into three broad categories:
- training data: ensure that training data is representative, collected responsibly and processed to mask protected attributes;
- transparency and accountability: ensure that key stakeholders understand the design and impact, including trade-offs between simplicity and accuracy (fit) and ensure that system designers and supervisors can explain the design and function in lay terms; and
- control and review: rigorously test artificial intelligence powered decision making to ensure that it is not affected by bias (a simple illustration follows below), ensure accountability for ongoing supervision of outcomes, and establish a clear communication channel with a timely and effective appeal mechanism.
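As a minimal sketch of what a bias test under 'control and review' might look like, the code below compares approval rates across a protected attribute and flags a disparity for human review. The groups, threshold and decision log are invented for illustration; a real review would use the organisation's own decision records and a fuller set of fairness measures.

```python
import numpy as np

# Illustrative decision log: 1 = approved, 0 = declined, with a protected
# attribute recorded separately for audit purposes (invented data).
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0])
group     = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

def approval_rate(g):
    """Approval rate for one group."""
    return decisions[group == g].mean()

rate_a, rate_b = approval_rate("A"), approval_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Group A: {rate_a:.0%}, Group B: {rate_b:.0%}, ratio: {ratio:.2f}")

# A common (but rough) screening rule flags ratios below 0.8 for human review;
# this is a trigger for investigation, not a conclusion about discrimination.
if ratio < 0.8:
    print("Disparity flagged: escalate for review of training data and model.")
```

A check like this is only a starting point: it supports the appeal mechanism and ongoing supervision described above, but it does not by itself establish or rule out bias in the underlying model.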