Automated correspondence not necessarily a 'decision' which can be relied on
Pintarich v Deputy Commissioner of Taxation
Summary
The Full Court of the Federal Court of Australia in Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79 found that a taxpayer remained liable for interest charges on a tax liability despite receiving a computer-generated letter from the Deputy Commissioner of Taxation purportedly remitting the taxpayer's liability.
The Court held that the statement in the computer-generated letter could not be relied on because no mental process accompanied the issuing of the relevant part of the letter.
The judgment has clear implications for administrative law and for governments' increasing reliance on document automation and artificial intelligence in decision making processes. It is a reminder of the risks of taking automated correspondence at face value where it purports to convey a 'decision', and it creates uncertainty around the ability to rely on automated decision making processes.
Facts: The computer-generated letter
The taxpayer, Mr Pintarich, had a tax liability which included a General Interest Charge (GIC) component. A delegate of the Deputy Commissioner of Taxation, following conversations with the taxpayer's accountant, entered information into a computer-based ‘template bulk issue letter’, which generated a letter (the computer-generated letter). The computer-generated letter was sent to the taxpayer from the Australian Taxation Office (ATO) in December 2014, purporting to remit a significant portion of the GIC liability provided a lump sum payment was made before a set date. The delegate did not review the contents of the letter before it was sent. The taxpayer proceeded to pay the lump sum.
After payment of the lump sum, further statements of account for the GIC were issued to the taxpayer. In May 2016, a second Deputy Commissioner wrote to the taxpayer advising that he had determined to deny the taxpayer's request for full remission of GIC and to grant only a partial remission of the GIC owing. The taxpayer viewed this later 'decision' as ultra vires, as a decision had already been communicated to him in the computer-generated letter. The Deputy Commissioner later advised the taxpayer that the portion of the computer-generated letter purporting to remit the GIC liability had been included in error, and therefore contended that no decision on the application for remission of GIC had been made at the time of the computer-generated letter.
Issue: Was there a 'decision'?
In essence, the question was whether the computer-generated letter from the ATO purportedly remitting the taxpayer's GIC amounted to a 'decision' by the Commissioner of Taxation.
Majority Judgment: A 'decision' requires a mental process of reaching a conclusion
In a 2:1 decision, the majority of the Full Federal Court (Moshinsky and Derrington JJ) found that in order for there to be a 'decision' to remit GIC there needed to be both:
- a mental process of reaching a conclusion; and
- an objective manifestation of that conclusion.
The primary judge's finding that the delegate had not undertaken a process of deliberation, assessment or analysis in deciding whether to grant the taxpayer's application for remission of GIC liability was unchallenged. On that basis, the majority found there was no mental process of reaching a conclusion concerning the remission of GIC when the computer-generated letter was sent, even though the letter was an objective manifestation of a conclusion. Accordingly, the Court held that no decision had been made and the Deputy Commissioner was not bound by what was conveyed in the computer-generated letter.
This approach prevailed despite the majority acknowledging that the outcome may be perceived as unfair and may create administrative uncertainty about relying upon communications with government agencies. In their view, there was a relatively small likelihood of a similar data entry error being made by the ATO in relation to other decisions in the future.
Dissenting Judgment: Renouncing an objective 'decision' for lack of subjective mental process may undermine administrative law
In a very practical judgment, Kerr J (in dissent) made the case that allowing decision makers to disavow conduct as not being a 'decision' wherever there is an inconsistency between their own or their officers' subjective mental processes and the objective expression of those processes would completely undermine fundamental principles of administrative law. The determination as to whether a decision has been made, his Honour argued, "requires close assessment of whether the circumstances in which the conduct said to be, or not to be, a decision arose was within the normal practices of the agency and whether the manifestation of that conduct by an overt act would be understood by the world at large as being a decision."
Justice Kerr reasoned that determining whether a decision has been made must be fact and context specific. Where a decision maker's subjective mental process differs from the objective manifestation of a decision, it should not cease to be a decision for that reason alone, as there are circumstances where decisions are made without any explicit mental engagement.
Justice Kerr also disagreed that this scenario was unlikely to arise in the future, given "the growing interdependency of automated and human decision making", and considered there was reason to be concerned that the majority's approach gives licence to such unfairness.
High Court refused leave
The High Court has since refused the taxpayer's application for special leave to appeal on the basis that the proposed appeal had insufficient prospects of success. This appears to be a missed opportunity for the High Court to clarify the extent to which 'decisions' conveyed by automated correspondence can be relied upon.
Implications for AI/Automation
The conclusions in this case emphasise the challenges of using traditional administrative law principles to scrutinise automated decision making processes. As automation is increasingly used to ease administrative burden and make administrative decisions, the uncertainty created by this case may have broader effects on the introduction of, and reliance on, these technologies in administrative decision making, particularly as governments seek to further embed artificial intelligence into decision making tools.
In the past, automation has helped humans apply rules to individual cases. Increasingly, however, automated systems are becoming the primary administrative decision makers. Difficulties will arise as automation collapses decision making into rulemaking, making it nearly impossible to determine whether a decision resulted from factual errors or distorted policy. Full automation of decision making without the counter-balance of human mental process therefore arguably risks dismantling critical procedural safeguards at the foundation of administrative law.
In particular, procedural fairness may be impaired where computer programmers change the substance of rules when translating them from complex human language into computer code. This is a difficult task, and shades of meaning may be lost or distorted even where programmers have a good understanding of statutory construction. Data input errors can also lead to incorrect decisions. Further problems arise in verifying that a program has correctly encoded the rules: for example, hearings may not give individuals a meaningful opportunity to challenge automated decisions where expert testimony about a computer system's reasoning is unavailable.
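A purely hypothetical sketch may help illustrate the point. The rule, thresholds, field names (such as hardship_flag) and letter text below are invented for illustration only and do not reflect any real ATO system; the sketch simply shows how a discretionary statutory test can collapse into rigid program logic, and how a single data entry error can flow straight through to the correspondence a person receives.

```python
# Hypothetical illustration only: a discretionary remission test reduced to
# rigid boolean logic, and a data entry error flowing through to a letter.
# None of the field names, thresholds or wording reflect any real system.

from dataclasses import dataclass


@dataclass
class RemissionRequest:
    taxpayer_name: str
    gic_owing: float        # general interest charge outstanding
    hardship_flag: bool     # operator-entered flag standing in for a
                            # nuanced "serious hardship" assessment
    lump_sum_offered: float


def assess_remission(request: RemissionRequest) -> bool:
    # A statute might ask whether remission is fair and reasonable in the
    # circumstances; here that discretion collapses into two hard rules, and
    # shades of meaning (partial hardship, special circumstances) are lost.
    return request.hardship_flag and request.lump_sum_offered >= 0.5 * request.gic_owing


def generate_letter(request: RemissionRequest) -> str:
    # A bulk-issue template: whichever paragraph is selected is sent without
    # any human review of the individual letter.
    if assess_remission(request):
        body = (f"Your general interest charge of ${request.gic_owing:,.2f} "
                f"will be remitted on receipt of ${request.lump_sum_offered:,.2f}.")
    else:
        body = "Your request for remission of the general interest charge is declined."
    return f"Dear {request.taxpayer_name},\n{body}"


# A single mis-keyed field (hardship_flag entered as True in error) produces a
# letter purporting to remit the liability, even though no officer ever turned
# their mind to the question.
print(generate_letter(RemissionRequest("Taxpayer", 120_000.0, True, 60_000.0)))
```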
The pervasive uptake of machine-learning algorithms and automated decision-making tools by governments around the world makes it likely that errors in the human inputs underlying such tools, whether coding errors or data entry errors, will become more common, not less. This sits uneasily with the majority's view that this type of scenario is unlikely to occur more frequently in the future.
The use of these systems also raises important questions as to the measures necessary to ensure the legality of the decisions they make. Currently, the authority to use such systems is not always transparent or express. Increasingly, legislative schemes confer express authority on automated systems by, for example, deeming a decision made by a computer program to be a decision of a human decision maker; the Business Names Registration Act 2011 (Cth) is one example. Despite this, it is by no means clear that the issue is being dealt with comprehensively, and such deeming provisions require acceptance of high-level constructs of the decision making process.
Nonetheless, as Kerr J contended (in dissent), the "hitherto expectation that a ‘decision’ will usually involve human mental processes of reaching a conclusion prior to an outcome being expressed by an overt act is being challenged by automated ‘intelligent’ decision making systems that rely on algorithms to process applications and make decisions."
Justice Kerr also acknowledged that "what was once inconceivable, that a complex decision might be made without any requirement of human mental processes is, for better or worse, rapidly becoming unexceptional. Automated systems are already routinely relied upon by a number of Australian government departments for bulk decision making."
Large-scale automation systems employing coded logic are now unavoidably relied upon in the decision making processes of many Australian government agencies. Their use in Australia has grown alongside tightening agency budgets and rapid growth in the volume and complexity of legislation and the government decisions it requires. These systems are now well established: the Administrative Review Council introduced high-level guidance on aligning automated decision making with administrative law in 2004, and that guidance was the catalyst for the Commonwealth Ombudsman's Better Practice Guide to Automated Decision Making released in 2007. The guide aimed to assist agencies with the implementation of automated systems and acknowledged that automated systems can effectively make decisions. Thus, where human decision makers have consciously set up processes for how and when automated decision making tools can be relied upon, it appears illogical to suggest that a human mental process of reaching a conclusion is required to make that decision.
Accordingly, Kerr J reasoned that the legal conception of what constitutes a decision must not be static: it should comprehend that technology has altered how decisions are in fact made, and that aspects of decision making, or its entirety, can occur independently of human mental input. In light of this, there is now uncertainty as to what constitutes a decision when automation is a central part of the administrative decision-making process.
Authors: Tim Brookes, Partner and Mitchell Bazzana, Graduate.
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.