Competition
This section summarises how competition law applies to the use of AI, post-implementation, by organisations seeking to digitise their business.
Over recent years, many competition authorities around the globe have been grappling with the extent to which current competition laws and policies are appropriate for protecting competition in the digital era, with different jurisdictions taking different approaches.
By way of legal background, competition law in most jurisdictions will prohibit:
- agreements which prevent, restrict or distort competition by object or effect; and
- abusive conduct by dominant companies which restricts or distorts competition.
For example, in EU law these prohibitions are set out in Articles 101 and 102 of the Treaty on the Functioning of the European Union respectively. For ease of reference, these prohibitions are referred to below as Article 101 and Article 102.
Collusion
Whilst there are many potential positive benefits of AI (for example, reducing transaction costs for firms, personalising products and services and giving consumers greater information on which to base their decisions), it can also be used to facilitate illegal coordination between competitors in breach of Article 101. For example, AI might be used to facilitate:
Explicit coordination to implement cartels

There are already a number of examples where regulators have found organisations using rudimentary AI to explicitly coordinate conduct in breach of competition law. For example, in 2016 the UK Competition and Markets Authority (CMA) penalised firms which used automated re-pricing software to align the prices of their products sold online through Amazon. In addition, in 2018 the European Commission (Commission) imposed significant fines on consumer electronics manufacturers for fixing online resale prices using pricing software. The use of pricing software has also been considered by the European Court of Justice, which held that a form of cartel was created where restrictions on maximum discounts were programmed into online booking system software used by Lithuanian travel agencies. In Asia-Pacific, regulators in Singapore and Australia are also alive to the competition law risks that can arise from the use of pricing algorithms; the Australian regulator, for example, has acknowledged that such algorithms may be used to effectively engage in, and sustain, collusion.

Explicit coordination to enforce cartels

The Commission has also stated that use of price-monitoring algorithms to identify where cartel members deviate from agreed cartel pricing could breach Article 101.

Tacit coordination

Regulators in jurisdictions such as the UK, France, Germany, Australia and Singapore have considered whether algorithms could result in tacit coordination, for example where multiple competitors use the same algorithm and so converge towards the same price.
Self-preferencing
Concerns have also been raised about the use of algorithms by companies dominant in one market, to preference their services in another. For example, the Commission fined Google €1.49 billion for abusing its dominance as a search engine by programming its search results to prioritise its own comparison shopping service.
Personalisation vs discrimination
The UK CMA considered the role of algorithms in personalised pricing in a 2018 research paper. Although the CMA found evidence of algorithms being used to personalise search results, advertising and discounts, there was limited evidence of algorithms being used to personalise pricing. However, the CMA noted that the use of such tools can be expected in the future and is most likely to harm consumers where, for example, markets have limited competitive constraints and consumers can be divided into small groups based on their willingness to pay. In such cases, if a dominant player engages in personalised pricing, it might constitute discriminatory behaviour and therefore breach Article 102.
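A minimal sketch of the mechanism the CMA describes (the segment labels, base price and mark-ups below are entirely hypothetical): consumers are sorted into small groups by estimated willingness to pay, and each group is quoted a different price for the same product.

```python
# Hypothetical illustration of personalised pricing: the same product is
# quoted at different prices depending on the consumer's inferred segment.

BASE_PRICE = 100.0

# Mark-ups per inferred willingness-to-pay segment (illustrative values only).
SEGMENT_MARKUP = {"price_sensitive": 0.90, "average": 1.00, "captive": 1.25}

def personalised_price(segment: str) -> float:
    """Quote a price based on the consumer's inferred segment."""
    return round(BASE_PRICE * SEGMENT_MARKUP[segment], 2)

print(personalised_price("price_sensitive"))  # 90.0
print(personalised_price("captive"))          # 125.0
```

The competition concern arises when a dominant firm applies logic of this kind to charge captive consumers materially more for an identical product, which could amount to the discriminatory conduct described above.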
Access
Where an AI-based technology, or a dataset derived from AI-based technology, is owned by a single company and becomes so successful that it becomes an essential requirement for the provision of another service or good, the refusal to license that technology to a third party may qualify as an infringement of Article 102 if that company is dominant.
Ex ante regulation
In many jurisdictions, such as the EU, the UK, Spain and Australia, some stakeholders have suggested that current competition laws do not go far enough in countering some of the above risks. They have argued for ex ante regulation of digital platforms with substantial market power, and of certain data and technology, which might cover areas such as interoperability, non-discrimination obligations, transparency and access rights.
In some sectors, data sharing via common APIs has already been mandated by regulation on competition grounds (for example as part of the UK Open Banking initiative, developed to remedy competition concerns in the retail banking sector).
In terms of AI used in the context of digital platforms, the EU has already introduced new rules aimed at providing businesses with a more transparent, fair and predictable online business environment, and a system for seeking redress (the EU Regulation on platform-to-business relations).
Standard setting
It is not uncommon, as part of the development and growth of new technologies such as AI, that industry standards are set. Where these are agreed by competing market players, care should be taken to ensure that they do not discriminate against, or exclude, certain competitors without objective justification in breach of Article 101.
In addition, a holder of a patent may breach Article 102 if it promotes its technology as an industry standard which is subsequently adopted, while at the same time not disclosing that it holds the exclusive right to license that technology until after the standard is set, and/or licensing it on unfair terms.
Responsibility for AI decisions & actions under competition law
Questions have been raised about the culpability of organisations whose AI technology might lead to an infringement of competition law (for example through automated price coordination).
Competition authorities on the whole are clear that, where algorithms are deliberately used to coordinate conduct, the organisation will be held liable for the AI’s actions. The position is somewhat less clear if the AI technology itself is the decision maker and acts entirely independently of the organisation using the software.
The position held by some regulators is that this should be treated no differently than if a rogue employee had been responsible, thus placing culpability on the business. This all means that:
- compliance training for those adopting and designing such technology is just as important as training for the rest of the business; and
- businesses which adopt AI to assist in commercial decision-making (such as price-setting) should also be monitoring the effect of that AI decision-making to ensure that it is competition law compliant.
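As a minimal sketch of what such monitoring might involve (the function, tolerance and thresholds below are hypothetical assumptions, not a recognised compliance standard), a business could flag automated prices that track a rival's price within a tight tolerance for several consecutive periods, so a human can review the conduct before it hardens into a pattern.

```python
# Hypothetical compliance-monitoring sketch: flag automated prices that
# stay within a tight relative tolerance of a rival's price for several
# consecutive periods, for escalation to human review.

def flag_convergence(our_prices: list[float],
                     rival_prices: list[float],
                     tolerance: float = 0.01,
                     run_length: int = 3) -> bool:
    """Return True if our price stayed within `tolerance` (relative) of
    the rival's price for at least `run_length` consecutive periods."""
    run = 0
    for ours, theirs in zip(our_prices, rival_prices):
        if theirs and abs(ours - theirs) / theirs <= tolerance:
            run += 1
            if run >= run_length:
                return True
        else:
            run = 0
    return False

# A sustained near-identical run is flagged; independent pricing is not.
print(flag_convergence([100, 99.5, 99.5, 99.4], [100, 99.5, 99.6, 99.4]))  # True
print(flag_convergence([100, 120, 80, 150], [100, 99, 101, 98]))           # False
```

A flag of this kind is evidence for review rather than proof of infringement, but maintaining such logs helps a business demonstrate that it is actively supervising its pricing AI.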
Current at 20 November 2020
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.