Cyber security and systems safety – UK Code of Practice issued

    Rapid development of AI has always gone hand in hand with safety concerns, and the debate over how best to address them continues. With the EU ban on the most harmful AI practices now in force, the UK Government has issued a Code of Practice for the Cyber Security of AI (the Code). The Code aims to boost business and public confidence in AI by minimising cyber threats, and is seen by the Government as promoting the UK as a world leader in the AI field. It is a voluntary framework, intended to develop into baseline security standards that will apply globally, not just UK-wide. It covers all AI systems – including genAI – except those used purely for academic research, and aims to provide clarity across the entire AI lifecycle, with the spotlight on the role of developers and operators in underpinning the security of systems. Despite its non-binding status, the Code has the potential to shape industry standards. It is clearly designed to strike a balance between security and innovation, and increasing the transparency of AI systems through audits and assessments may well enhance trust in AI technologies.

    The Code is centred on 13 principles, grouped into five phases that reflect the AI lifecycle:

    • Design. Security, as well as functionality and performance, should be built in from the outset. Systems should be designed to withstand adversarial AI attacks, unexpected inputs and system failure, and unnecessary functionality should be avoided as it can increase risk. Data custodians should be involved at the design stage to match the intended use of the system with the type of data it will use. Any external providers engaged to work on system design and development should ideally comply with the Code.
    • Development. Developers should evaluate threats and manage risks to the AI system throughout this phase. The Code highlights the use of paper trails to document asset lists, security risks and mitigation strategies; these should be reviewed regularly and should align with the risk appetite of the operator (as opposed to the end user). Security assessments should take into account vulnerabilities within the supporting infrastructure and supply chain, including the cloud. End users should be made aware of any threats which developers cannot resolve.
    • Deployment. Risk management is key. Alongside ongoing monitoring at a system level, human involvement is important, spanning everything from human oversight of AI systems in use to staff training on how to use the system, security awareness, and identifying and reporting security risks. Prohibited use cases should be highlighted and addressed. Disaster recovery and business continuity plans should deal specifically with AI. Compliance with data protection legislation is, of course, essential while the system is in use.
    • Maintenance. Developers are urged to provide security updates and patches for their AI systems when needed, with major updates treated as new versions and tested and evaluated before going live. For operators, the Code focuses on the need to monitor the behaviour of their systems and user compliance. Monitoring performance over time can flag any changes – whether sudden or gradual – which could affect security.
    • End of life. Security remains vital when an AI system is decommissioned. This is an area where the input of data custodians will help ensure that relevant data and configuration details are dealt with correctly.

    The Code was published at the end of a week in which a lack of cyber resilience in the public sector was highlighted, against a background of widespread unpreparedness for cyber attacks among UK businesses generally. At the same time, the EU AI Act's ban on the most harmful AI systems has come into force, accompanied by lengthy Guidelines to aid consistent enforcement. This prohibition – the first aspect of the AI Act to come fully into force – is a small part of a comprehensive legal framework which some have criticised as too restrictive and likely to slow innovation and adoption in the European AI sector. Although the UK has been promised a Cyber Security and Resilience Bill that will apply more widely across the digital economy, its current approach to AI governance remains sector-focused and light touch, in a clear drive to accelerate adoption and boost national productivity. Perhaps a global standard will prove the best solution for what is, ultimately, a global technology.

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.