Over the last couple of years, various UK and EU bodies and regulators have published guidance and declarations on best practice for building ethical and trustworthy AI. This guidance provides valuable insight into the future regulation of AI across the UK and Europe. It is important that organisations investing in, or implementing, AI keep updated with these developments as they are likely to reflect the future regulatory landscape in which AI will be required to operate.
Ethics guidelines
The EU has released a set of non-binding Ethics Guidelines (the Guidelines) with a view to steering organisations towards creating trustworthy AI. The Guidelines set out three components that must work together in harmony: trustworthy AI should
- comply with applicable laws;
- be ethical; and
- be robust.
The Guidelines also provide a framework to support businesses in achieving trustworthy AI (focusing on the second and third components above). The framework includes seven key requirements for realising trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal wellbeing; and accountability. An organisation should use both technical and non-technical methods to implement these requirements (noting that the Guidelines contain an assessment list to assist with implementation).
From a reputational perspective, organisations may want to consider signing up to the Guidelines to promote customer confidence in their AI systems.
European Commission White Paper on AI
The European Commission released the 'White Paper on Artificial Intelligence – a European approach to excellence and trust' in February 2020 and held a public consultation on it until June 2020, with the aim of promoting the uptake of AI. The White Paper focuses on three topics:
- actions for support, development and uptake of AI;
- future regulatory AI framework options; and
- safety and liability aspects of AI.
In July 2020, the European Commission announced a number of preliminary trends observed in the responses to the public consultation on the White Paper on AI. In particular, respondents supported revising the Product Liability Directive to cover risks engendered by certain AI applications, and introducing a new regulation setting out the liability rules applicable to AI operators. See the Liability section for further information on the revision of the Product Liability Directive.
CDEI AI Barometer Report
In June 2020, the UK Centre for Data Ethics and Innovation (CDEI) released the AI Barometer, a report analysing the most pressing opportunities, risks and governance challenges associated with AI and data use across five key UK sectors including Financial Services, Digital & Social Media and Energy & Utilities.
Of note are the following three barriers to ethical AI and data use that the report pinpoints as requiring close attention, accompanied by CDEI examples of potential mitigating action:
| Barrier | Mitigating action |
| --- | --- |
| Low data quality and availability | Investing in core national data sets, building secure data infrastructure, trusted data sharing mechanisms, ethical data regulation |
| Need for coordinated policy and practice | Development and coordination of policy, defining and aligning industry and regulatory standards |
| Lack of transparency around AI and data use | Requirements for public disclosure and independent audit |
In the 12 months from June 2020, the CDEI plans to expand the AI Barometer to look at further sectors, as well as promote its findings to policymakers and other decision-makers across industry, regulation and research. The CDEI will also work with partners in the public and private sectors to launch a new programme of work aimed at addressing the barriers that the AI Barometer identifies.
Guidance on explaining AI based decisions
In May 2020, the ICO and the Alan Turing Institute published practical guidance on 'Explaining decisions made with AI', with a view to increasing trust in AI. The guidance recommends:
- selecting priority explanations by considering the domain, use case and impact on the individual;
- collecting and pre-processing data in an explanation-aware manner;
- building the AI system to ensure the organisation is able to extract relevant information for a range of explanation types;
- translating the rationale of the AI system's results into useable and easily understandable reasons;
- preparing implementers, for example through training, to deploy the AI system; and
- considering how to build and present the explanation to an individual (including contextual factors and deciding what information to provide).
Best practice guidance for data protection compliant AI
In July 2020, the ICO released 'Guidance on AI and data protection'. The guidance contains recommendations on good practice for organisational and technical measures to mitigate the risks to individuals that AI may cause or exacerbate, as well as auditing tools and procedures to assess how AI tools comply with applicable laws.
Whilst there are no penalties for non-compliance with this guidance itself, organisations must ensure that they maintain compliance with applicable data protection laws.
Current at 20 November 2020