Liability
The complexity and behaviour of AI spark a number of potential issues. One issue requiring careful consideration is legal liability. There is little specific legislation allocating liability where the use of AI has caused damage or loss. This may be set to change, as the White Paper on AI released by the European Commission, and the European Parliament resolution on a civil liability regime for artificial intelligence, contemplate amendments to the current Product Liability Directive. In addition, the European Parliament recommends the adoption of a new regulation setting out the liability rules to apply to AI operators.
Contract and tort each have their own specific rules. Understanding the applicable AI-based risks, how they are likely to be assessed under the current legal framework, and how this might change is therefore key for any organisation considering implementing AI. This section identifies a number of those risks and the liability issues associated with them. It also sets out an overview of the current legal framework for liability in relation to contract, tort and statute.
Identifying AI-based risks that could result in liability
AI-based risks can be inherent in the technology being adopted or the environment in which it is deployed, and may also arise during the implementation or AI procurement process.
Examples include:
Design: Certain types of AI (in particular, unsupervised machine learning and deep neural networks) autonomously learn a model to create a data output. The various stages in between the first input and the last output can be difficult to decipher and are often unreadable by humans.
As humans are unaware of the logic used to create the final output, it can be difficult to predict how such systems will behave in practice, leading to unexpected outcomes such as erroneous decisions which could result in economic loss or even physical harm. Further, issues can arise simply from flaws in the initial design of AI systems, stemming from human error.
Performance and use: AI may be subject to quality and performance issues in the same way as products and services in the traditional sense, namely faults; poor performance; unavailability; and malfunction. These could have a number of causes ranging from latent defects to incorrect use by the operator, including failure to carry out maintenance or implement updates or patches as necessary.
“Corner case” situations: These occur where a system encounters a problem or situation outside its normal operating parameters or procedures.
Vulnerability: Without adequate security, the potential for criminal misuse of digital systems and data is high. AI systems are subject to particular vulnerabilities which do not apply to conventional systems. In addition, the technology could be abused, possibly for facilitating large-scale or complex cyber-attacks such as “spear-phishing” where emails from an apparently trustworthy sender are used to steal data or install malware on target networks.
Complexity: The number and interdependency of the components of AI systems (the tangible parts, the software or applications, the data, the data processing) mean that it will be increasingly difficult to identify the source of any problems. This will be compounded where a number of participants (system operators, developers, data suppliers and platform providers) are involved in the production process, with the result that it will frequently be unclear where responsibility lies. As will be seen, this could be an issue where causation is key to determining liability.
The framework of legal liability
Specific UK legislation dealing expressly with liability for damage or loss arising from the use of AI (at the time of writing) is limited to the Automated and Electric Vehicles Act 2018. As a result, in most sectors liability will be determined in accordance with the current legal regime related to contract, tort and statute and will reflect the technology involved, the damage or loss suffered and the relationship between the relevant parties.
Contractual liability
Liability in a contractual setting
The law sets out certain principles governing liability where the parties are in a contractual relationship: for example, between organisations in a supply chain; between a business which has bought in AI-based services and the service provider; or between a supplier and the end consumer.
Common law position
The general position is that, on a breach of contract, the non-breaching party will be entitled to damages of an amount calculated to put it in the position it would have been in had the contract been properly performed. If it suffers physical injury or loss or damage to property it can also recover damages for loss of this kind. In some cases, the damages payable may also cover incidental costs or wasted expenditure resulting from the breach.
Any award of damages will, however, be subject to the rules on causation (i.e. there must be a causal connection between the defendant’s breach of contract and the claimant’s loss in that the breach must have been the effective, or dominant, cause of that loss), remoteness (foreseeability of loss) and the requirement for a claimant to mitigate its losses.
Contractual options
Given the current legislative uncertainty surrounding the allocation of liability where the AI fails, it is important that any organisation contracting to buy in or implement AI considers contractual warranties, indemnities and limitations for each of the parties at the outset.
Parties must also ensure their compliance with the Unfair Contract Terms Act 1977, which regulates attempts to exclude or limit liability in contract in a business-to-business setting. Parties may wish to consider provisions related to indemnities, liquidated damages and service contracts, as well as clauses related to product recall.
Tortious liability
Tortious liability does not depend on there being a contractual relationship between the defendant and the claimant. The tort of negligence, which is the most common cause of action, will arise where there is a duty of care (in other words a duty to take reasonable care not to cause harm) which is breached, resulting in matters such as damage to property, physical harm, financial loss and breach of confidence. Principles of causation, remoteness and mitigation, as per contract law above, also apply.
Sometimes, a duty of care in tort exists alongside a contractual relationship between the parties and in such a situation concurrent liability can arise. A claimant will then be able to choose whether to frame their claim in contract or in tort depending on which is more advantageous.
Statutory liability
There is very little legislation at either UK or EU level addressing specific issues of liability for AI and its use. No specific legislation has been introduced in the UK to directly clarify the AI liability regime, other than in relation to fully autonomous driving under the Automated and Electric Vehicles Act 2018. That said, there are a number of statutes that will be relevant to product liability in the context of AI-based risks.
Consumer Protection Act 1987
The Consumer Protection Act 1987 is of general application and makes manufacturers strictly liable for damage caused by 'defective products'. A product is defective if 'the safety of the product is not such as persons generally are entitled to expect.' In determining this, the courts will take into account instructions and warnings that accompany the product and how the product might reasonably be expected to be used. There are various defences, including compliance with UK or EU law, and a 'state of the art' defence which is assessed according to the state of scientific and technical knowledge at the relevant time.
This defence could be relevant to complex and innovative AI systems, particularly those which use software based on self-learning algorithms, such as artificial neural networks, where the bug is not expressly implanted by a programmer but arises endogenously from the operation of the learning algorithm. A manufacturer could argue that the system performed correctly during extensive testing and it was effectively impossible to predict the particular event, or series of events, that led to the injury.
Product Liability Directive (85/374/EEC)
At an EU level, the Product Liability Directive (85/374/EEC) (Directive) imposes a system of strict liability allowing consumers who suffer harm to claim compensation from the producer without having to show fault.
In the UK, the Directive is implemented by Part I of the Consumer Protection Act 1987 (discussed above). Separately, the General Product Safety Regulations 2005, which implement the General Product Safety Directive (2001/95/EC), apply to all products for consumer use unless sector-specific safety requirements apply.
The Directive is currently under review by the European Commission to establish whether it remains fit to address civil liability claims in a digital environment. As part of this review, it is envisaged that the definition of "products" will be expanded to include digital content and digital services, and that existing concepts such as "damage", "defect" and "producer" will be adapted. For instance, the concept of "producer" would be changed to include manufacturers, developers, programmers, service providers and back-end operators.
New EU regulation on AI civil liability - 'high-risk' AI
The European Parliament has also proposed a new regulation to set out the rules for civil liability claims by natural and legal persons against operators of AI systems. This new regulation would apply within the territory of the European Union where a physical or virtual activity, device or process driven by an AI system has caused harm or damage to life, health, physical integrity or property, or has caused significant immaterial harm resulting in a verifiable economic loss.
Under this new regulation a twofold liability regime would be introduced consisting of "high-risk" and "low-risk" systems. An AI system would be considered high-risk when its autonomous operation could cause significant potential harm or damage to one or more persons in a manner that is random and goes beyond what can reasonably be expected.
The common principle for operators of both high-risk and low-risk AI systems is that they cannot escape liability on the ground that the harm was caused by an autonomous activity, device or process driven by the AI system. Operators of high-risk AI systems would be subject to a strict liability regime and would be able to exonerate themselves only in cases of force majeure. By contrast, operators of low-risk AI systems could escape liability if they can prove that the harm or damage was caused without their fault.
The regulation would also provide that operators of a high-risk AI system will need to have appropriate liability insurance with adequate cover, taking into account the amounts specified in the proposed regulation. The liability of the operator for high-risk AI systems would be capped at: (i) EUR 2 million in the event of death or harm to a person's health or physical integrity, and (ii) EUR 1 million for damage to property or significant immaterial harm that results in a verifiable economic loss. Limitations in time would depend upon the type of damage, without prejudice to national law regulating the suspension or interruption of limitation periods.
The European Commission's legislative proposal of the new regulation is expected to be issued during the first quarter of 2021. In the meantime, organisations intending to deploy or invest in AI should continue to monitor both UK and EU guidelines, reports and any proposed legislation or regulation from both government and industry bodies. This is particularly important if the AI being deployed or developed would be categorised as 'high-risk' under the EU's proposed regulation.
See the Ethics section for further information on future plans for AI regulation in the context of product liability.
Current at 20 November 2020
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.