
The Online Safety Bill - a radical new approach to regulating online content


    What has happened and why is it significant?

    On 17 March 2022, the Government laid before Parliament a new Online Safety Bill aimed at tackling the rise in harmful content such as online abuse, revenge porn, extremist grooming, and self-harm and suicide-related material. A draft of the Bill was first published in May 2021, and its introduction to Parliament follows an extensive period of consultation with MPs and industry bodies.

    The UK Government has historically taken a hands-off approach to internet regulation. But with concerns around online safety intensifying over the past decade, it has responded with the Online Safety Bill.

    The Online Safety Bill aims to establish a new regulatory regime addressing both illegal content and content that is legal but harmful. Its enactment is significant because it will, for the first time, impose a statutory duty of care on "regulated service providers", such as social media sites and search engines, to regulate content on their platforms and to protect users from exposure to both illegal material and potentially harmful, but legal, material.

    What is the current regime?

    Online safety is currently governed by the EU E-Commerce Directive (Directive 2000/31/EC), implemented in the UK by the Electronic Commerce (EC Directive) Regulations 2002 (the E-Commerce Regulations). This establishes a two-tier liability scheme which differentiates between "online publishers" and "online intermediaries" (such as social media platforms).

    Online intermediaries currently face only a form of reactive liability: they are required to remove illegal content once they become aware of it. Crucially, they are under no obligation to monitor content uploaded to or shared via their platforms.

    New statutory duty of care

    The Bill will impose a statutory duty of care on online platforms which host or publish user-generated content. Regulated services will therefore include social media networks, search engines and video-sharing platforms. Once enacted, the legislation will apply to any company with users in the UK, even if the company is not based in the UK.

    Duty to monitor, prevent and protect

    Depending on their size and function, regulated service providers will, to varying degrees, have a duty to put systems and processes in place which protect users by limiting or removing harmful or illegal content. Category 1 companies (the most popular sites amongst users, such as Facebook, Twitter and Google) will have greater obligations placed upon them than Category 2 companies (typically smaller companies whose user numbers nevertheless exceed a certain threshold).

    Content related to terrorism or child sexual exploitation and abuse is designated "priority" illegal content. Online platforms will be required to respond to this type of content quickly and proactively. Other priority categories include: "communications" offences such as threats to kill; material promoting suicide; harassment and stalking; drug and weapons dealing; people smuggling; and fraud.

    The latest revision of the Bill seeks to address concerns surrounding the difficulties of identifying content which is legal, but harmful. The Secretary of State will be empowered to regulate this type of content through secondary legislation. 

    The Bill will not cover emails and text messages, comments and reviews on content, paid-for advertisements (other than "scam" advertising) or stories published by legitimate news sources.

    In complying with the new duty to regulate harmful and illegal content, online platforms will need to balance that duty against concurrent duties to protect users' rights to privacy, freedom of expression, journalistic content and content of 'democratic importance'.

    What penalties will there be and how will enforcement work?

    Under the Bill, Ofcom is granted enforcement powers as the online safety regulator. These include the power to impose fines of up to £18 million or 10 per cent of a company's annual global revenue, whichever is greater.

    Ofcom will be responsible for publishing non-binding Codes of Practice against which internet companies can assess their compliance with the statutory duties.

    Significantly, Ofcom is expected to be empowered to impose criminal sanctions on directors and senior managers for the more serious breaches of duty – for example, where a company has failed to implement effective systems to remove harmful content.

    Freedom of speech v duty to protect

    Arguably the most divisive aspect of the Bill is its focus on the identification and removal of harmful, but legal, content, which has the potential to pose a significant threat to users' freedom of expression online. The Bill will require internet companies to strike a balance between protecting their users from harmful content and refraining from systematic censorship of legitimate speech. Critics have therefore called for better legislative safeguards to be embedded within the Bill to ensure this right is not compromised.

    Onerous burden?

    The Bill represents a real headache for online companies. There is a concern that compliance costs will place disproportionate financial pressure on the industry, especially smaller players, and that the regulatory risk posed by the Bill will act as a barrier to entry, crippling start-up tech companies and limiting future innovation.

    Parliamentary scrutiny of the Bill has led to criticism that the legislation is neither robust nor clear enough to tackle certain types of illegal and harmful content. For instance:  

    • Secondary legislation governing harmful content will need to provide sufficient clarity as to the specific types of content which fall within scope.
    • A recent report from the DCMS Select Committee highlighted concerns that the Bill represents a significant "missed opportunity" as it fails to explicitly address serious issues (giving the example of child exploitation through "breadcrumbing").
    • Content pertaining to wider societal wrongs is not within scope; the spread of misinformation related to Covid-19 across a number of platforms is a recent example.

    The Bill will make an impact

    Like it or not, this Bill promises to revolutionise online regulation. The introduction of new, wide-ranging statutory duties of care for internet companies, including social media platforms, has the potential to generate regulatory investigations by Ofcom, as well as claims based on breach of statutory duty or related to data privacy.

    The Bill is also likely to present challenges in balancing compliance with the new online safety duties against the (often conflicting) duties to protect user privacy, freedom of expression, "journalistic content" and "content of democratic importance".

    The nature of online spaces means that potentially millions of users may be affected by a particular harm. It remains to be seen how and to what extent this may give rise to group litigation risks for big tech companies. Any such risks are likely to be exacerbated by the super-complaints mechanism proposed for inclusion in the final legislation, which will allow bodies representing the interests of UK users to complain directly to Ofcom about the features or conduct of a regulated service.

    Authors: James Levy, Tim West, Imogen Chitty and Brooke Moon

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.