European Parliament proposal for a future-oriented civil liability framework
The European Parliament (EP) has clarified its stance on a future regulation of Artificial Intelligence (AI) across the EU. On 20 October 2020, the EP adopted three reports outlining how the EU should regulate AI whilst boosting innovation, ethical standards and trust in technology.
- Ethics framework for AI. The EP has asked the European Commission to present a new legal framework setting out the ethical principles and legal obligations to be complied with when developing, deploying and using AI, robotics and related technologies in the EU, including software, algorithms and data.
- Intellectual property rights. The second recommendation deals with intellectual property rights for the development of artificial intelligence technologies. The proposal specifies that AI should not have legal personality.
- Civil liability regime for AI. The EP's report calls for a clear and coherent EU civil liability regime for AI in order to provide legal certainty for both consumers and businesses and, ultimately, to boost the uptake of AI.
In this article we consider the civil liability regime for AI as envisaged by the EP.
Introduction
In order to address the potential liability issues associated with the use of AI, the EP suggests a twofold approach: on the one hand, a revision of the current EU Directive 85/374/EEC on product liability (PLD) and, on the other hand, the adoption of a new regulation setting out the liability rules to apply to AI operators.
Revision of the product liability directive
In line with the White Paper's recommendation, the EP states that there is no need for a complete revision of the PLD, which has proven to be an effective means of compensating for damage caused by a defective product. Instead, the EP believes the PLD should be updated in a targeted way so that it can better address civil liability claims of parties who suffer harm or damage caused by AI-systems.
Further, the EP proposes to amend the PLD by expanding the definition of "products" to include digital content and digital services, and adapting existing concepts such as "damage", "defect" and "producer". For instance, the concept of "producer" should incorporate manufacturers, developers, programmers, service providers and back-end operators.
The EP has asked the Commission to assess whether the PLD should be transformed into a Regulation rather than remaining a Directive. Unlike a Directive, which must be transposed into national law by each Member State, a Regulation is a binding legislative act which applies in its entirety across the EU.
Adoption of a future regulatory framework
a) Scope of application and definitions
The EP proposes that the new regulation should set out the rules for civil liability claims of natural and legal persons against operators of AI-systems. The new regulation should apply within the territory of the European Union wherever a physical or virtual activity, device or process driven by an AI-system has caused harm or damage to life, health, physical integrity or property, or has caused significant immaterial harm resulting in a verifiable economic loss.
It is suggested that under this new regulation, the concept of "operator" includes both the front-end and the back-end operator. The "front-end" operator should be defined as the natural or legal person who exercises a degree of control over a risk connected with the operation and functioning of the AI-system and benefits from its operation. The "back-end" operator should be defined as the person who, on a continuous basis, defines the features of the technology, provides data and essential back-end support service and therefore also exercises a degree of control over the risk connected with the operation and functioning of the AI-system.
b) Different liability rules for different risks
The EP proposes to create a twofold liability regime consisting of "high-risk" and "low-risk" systems. An AI-system is considered high-risk when its autonomous operation could cause significant potential harm or damage to one or more persons in a manner that is random and goes beyond what can reasonably be expected.
All high-risk AI-systems, and critical sectors where they are to be deployed, will be listed in an annex to the new regulation. Given the rapid technological developments, the EP proposes that the Commission should have the ability to review the annex at least every six months.
The common principle for operators of both high and low-risk AI-systems is that they cannot escape liability on the ground that the harm was caused by an autonomous activity, device or process driven by the AI-system.
The main difference between the two regimes would be that operators of low-risk AI-systems could escape liability by proving that the harm or damage was caused without their fault. To do so, the operator would need to prove that either:
- the AI-system was activated without the operator's knowledge, and all reasonable and necessary measures to avoid such activation were taken; or
- the operator acted diligently by selecting a suitable AI-system for the right tasks and skills, putting the AI-system into operation, monitoring its activities and maintaining operational reliability by regularly installing all available updates.
In summary, operators of high-risk AI-systems would be subject to a strict liability regime and would not be able to exonerate themselves, save in cases of force majeure.
c) Joint and several liability
If multiple operators are involved, they should be jointly and severally liable, but would have the right to recourse proportionately against each other provided the affected person was compensated in full.
The proportion of liability should be determined by the respective degree of control that the operators had over the risk connected with the operation and functioning of the AI-system.
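The recourse mechanism described above amounts to a pro-rata apportionment by degree of control. As a purely illustrative sketch (the function name, the control weights and the compensation figure below are hypothetical assumptions, not figures from the proposal, which prescribes no calculation method), the apportionment could look like this:

```python
# Hypothetical illustration of proportional recourse between jointly
# and severally liable operators. The weights and amounts are invented
# for this example; the EP proposal does not prescribe a formula.

def recourse_shares(control_weights, compensation):
    """Split compensation (paid in full to the affected person) between
    operators in proportion to each operator's degree of control over
    the risk connected with the AI-system's operation."""
    total = sum(control_weights.values())
    return {
        operator: compensation * weight / total
        for operator, weight in control_weights.items()
    }

# Example: a front-end operator assumed to have twice the degree of
# control of the back-end operator, after EUR 300,000 was paid in full.
shares = recourse_shares(
    {"front-end operator": 2, "back-end operator": 1},
    300_000,
)
```

On these assumed weights, the front-end operator would bear EUR 200,000 and the back-end operator EUR 100,000 of the compensation in the internal recourse between them.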
d) Insurance
Operators of a high-risk AI-system will need to have appropriate liability insurance with adequate cover, taking into account the amounts specified in the proposed regulation. The liability of the operator for high-risk AI-systems would be capped at:
- EUR 2 million in the event of death or harm to a person's health or physical integrity;
- EUR 1 million for damage to property or significant immaterial harm that results in a verifiable economic loss.
Limitations in time would depend upon the type of damage. This is without prejudice to national law regulating the suspension or interruption of limitation periods.
Next steps
With these three legislative recommendations, the EP has opened the discussion on AI regulation and invited the Commission to submit a legislative proposal in line with its recommendations. Following the closure of the public consultation on the White Paper on AI, the European Commission's legislative proposal is expected to be issued during the first quarter of 2021.