Online Safety

Government introduces Online Safety Bill in Parliament and announces Online Advertising Programme consultation

Published on 21st Mar 2022

Following a steady trickle of announcements of changes to the draft Bill, the government has introduced the Online Safety Bill to Parliament, hot on the heels of its recent announcement that the long-delayed consultation on the regulation of online advertising is now open.
 


First proposed in the government's 2019 Online Harms White Paper, the Online Safety Bill was published in draft in May 2021 (see our Online Safety update for an overview of the first draft of the Bill and our OSB in focus series for more in-depth analysis of its provisions). The draft was then scrutinised by a Joint Committee, which reported in December 2021. Since then, there has been a series of government announcements heralding the imminent arrival of the Bill in Parliament, which finally happened on Thursday 17 March 2022.

We will be publishing future insights into the updated draft of the Bill as it progresses through its legislative stages. Here we summarise some of the key changes that the new version incorporates.

Higher stakes for senior management 

The press release accompanying the government's confirmation that the Bill has been introduced to Parliament made it clear that the Bill will take a tougher stance on senior managers and executives at the platforms it catches.

Senior managers whose in-scope companies fail to cooperate with Ofcom's information requests could now face prosecution and possible imprisonment as soon as two months after the Bill becomes law, rather than after two years as previously drafted. 

Furthermore, a range of new offences has been added to the Bill to make senior managers criminally liable for destroying evidence, for failing to attend interviews with Ofcom or providing false information in them, and for obstructing the regulator when it enters company offices.

Treatment of 'lawful but awful' content 

The government's press release states that platforms will only be required to tackle "legal but harmful" content – often referred to as "lawful but awful" content, such as content relating to self-harm, harassment and eating disorders – that is set by the government and approved by Parliament. This appears to be the case for content that is harmful to adults (for which the obligations in the original draft were less onerous than for content that is harmful to children); however, it does not necessarily appear to be the case for content that is harmful to children.

The updated draft retains the concept of certain categories of legal but harmful content (for both adults and children) being specifically designated in secondary legislation, and ditches the previous complex definition of what might constitute legal but harmful content. Nevertheless, numerous obligations and duties still apply to content that is not specifically designated, in particular "content presenting a material risk of significant harm to an appreciable number of children in the UK". 

Given that in-scope services will still need to consider the effect of certain types of non-designated harmful content on child users, freedom of speech advocates may still have concerns about over-removal of content.

'Priority illegal content'

In February, the government announced that the draft Bill would be "strengthened to stamp out illegal content". In practice, this means that 11 further categories of offence – in addition to terrorism and child sexual abuse and exploitation content – are now named in the Bill itself (rather than in secondary legislation) as "priority illegal content". 

The new categories include assisting suicide, online drug and weapons dealing, people smuggling, revenge porn, fraud, and inciting or controlling prostitution for gain. Where content is categorised as priority illegal content, in-scope services will need to take more proactive measures to prevent users from being exposed to it, including having proportionate systems and processes to prevent individuals from encountering such content in the first place, to minimise the time for which it is accessible, and to take it down swiftly after being alerted to its presence or otherwise becoming aware of it.

The government's logic is that by naming these offences in the Bill itself, in-scope services will be under pressure to have the appropriate measures in place from day one, and Ofcom will be able to take quicker enforcement action, without having to wait for secondary legislation to be passed.

More control for users 

Two new duties of care have been added, aimed at giving users of the largest user-to-user services tools to limit their exposure to unwanted communications and content. These new duties, described in the Bill as "user-empowerment" duties, apply to "Category 1" user-to-user services with the largest number of users, and include:

•    a new duty to include, to the extent that it is proportionate to do so, features which adult users may use to increase their control over harmful content; and 

•    a duty to include features which adult users may use to filter out "non-verified" users. 

This is in response to the well-documented abuse that people, particularly celebrities and politicians, currently experience online from anonymous "trolls".

Fraudulent paid-for ads included

Following significant efforts from campaign groups, the scope of the Bill has been extended to include fraudulent paid-for adverts. 
 
Previously, the first draft of the Bill covered only user-generated fraudulent content, which campaigners had warned would simply encourage criminals to shift strategy into paid-for advertising. While paid-for advertising generally remains carved out from the other duties of care set out in the updated draft Bill, the government announced that it would add a new legal duty. The duty would require the largest and most popular user-to-user services, as well as designated search services, to prevent paid-for fraudulent adverts appearing on their services and, in the case of user-to-user services, to remove them promptly where they appear despite such measures. 
 
Ofcom is expected to set out more details of what platforms will need to do to fulfil the new duty. Measures are expected to include: (1) requiring companies to scan for scam ads before they are uploaded; (2) checking identities of people posting ads; and (3) ensuring financial promotions are only made by companies authorised by the Financial Conduct Authority. 
 
Ofcom will be responsible for checking whether companies have adequate measures and systems in place to fulfil the duty of care, but will not be responsible for checking individual pieces of content.  

There is some concern among commentators that this extension may make it unclear which regulator will lead on the policing of ads, as responsibilities could fall somewhere between the remits of Ofcom, the Financial Conduct Authority and the Advertising Standards Authority – and potentially also the Online Advertising Programme.

Underage access to pornography

On Safer Internet Day in February, the government announced that the draft Bill would be significantly strengthened with a new legal duty requiring all providers of internet services that have links with the UK and publish pornography (subject to limited exemptions) to put robust checks in place to ensure that their users are aged 18 or over. 

The onus appears to be on in-scope service providers themselves to decide how to comply with their new legal duty, with age verification being only an example of how to do this. The intention here is to future-proof the legislation by not mandating the use of specific solutions.

New criminal offences biting on individuals 

Also in February, the government trailed that the new Bill would set out three new criminal offences, following the Law Commission's recommendations. These offences aim to capture a wider range of harms and methods of communication, encompassing emails, social media posts, WhatsApp messages and so-called "pile-on harassment". Importantly, these criminal offences will apply to individuals in both private and public communications, significantly broadening the scope of the Bill, which previously focussed only on the responsibilities of online service providers.

The new offences are: 

  • A threatening communications offence, where communications are sent or posted to convey a threat of serious harm. This is intended to capture online threats to inflict physical or sexual violence or cause serious financial harm, and in particular to offer protection for public figures as well as victims of coercive and controlling behaviour. 

  • A harm-based communications offence to capture communications sent to cause harm without reasonable excuse. This offence aims to make it easier to prosecute online abusers, as it is based on intended psychological harm and will consider the context or pattern of the communication. The government has said that the element of intent is intended to protect the right to free expression: a communication that is "offensive" but not sent with the intention to cause harm will not be captured by the offence. This is perhaps an attempt to address the uncertainty for service providers created by the draft Bill, which requires them to protect freedom of expression while simultaneously removing harmful content. However, it is not clear at this stage how a service provider would be expected to assess such intent at scale, given the sheer volume of content it needs to deal with.

  • An offence of sending a communication known to be false with the intention of causing non-trivial emotional, psychological or physical harm. This new offence covers false communications sent with the intention of inflicting harm, such as false bomb threats. The government has drawn a distinction between an intent to inflict harm and users sending misinformation under the false but genuine impression that the content is true; the former would have to be proved for the offence to apply.

The Bill also inserts into the Sexual Offences Act 2003 a new criminal offence of "cyberflashing". This involves perpetrators sending unsolicited images or films of their genitals to others, including via electronic means such as social media, dating apps, or functionality such as Bluetooth or AirDrop. Offenders who do this for the purpose of their own sexual gratification or to cause the victim humiliation, alarm or distress may face up to two years in prison.

Online Advertising Programme consultation

Forming a companion piece of sorts to the Online Safety Bill, a separate Online Advertising Programme consultation was also announced earlier this month, with much less pre-emptive fanfare.
 
While the Online Safety Bill has been amended to tackle fraudulent paid-for advertising, the programme will look to ensure that other organisations across the value chain play a role in reducing the pervasiveness of fraud, in addition to other harms created by online advertising. As such, there appears to be a move away from holding only advertisers accountable towards increasing accountability across the supply chain, with intermediaries, platforms and publishers (among others) proposed to be subject to specific transparency and accountability measures.
 
The consultation is limited to paid-for ads, so owned media and websites are out of scope, as are political ads. There is a particular focus on the needs of vulnerable consumers and on so-called "high risk" adverts, such as those for alcohol, medicines and gambling. Harmful body image portrayals, discrimination and harmful misinformation are also in scope.
 
The government has laid out a number of options for the level of regulatory oversight that could be applied across the value chain in order to improve transparency and accountability in the ecosystem. What could be implemented ranges from a continuation of the self-regulatory framework through to full statutory regulation.

The consultation will close at 11:45pm on 1 June 2022.

Why this matters

Now that the Bill has been introduced to Parliament, we have a better sense of how the final version will look. Given the scrutiny it will bring to bear on platforms and services within scope, not to mention the potential for fines of up to £18 million or 10 per cent of annual global turnover, online service providers will need to start thinking about how the Bill, as proposed, may bite on their services. 

That said, the Bill still has some way to go before it comes into force: it must pass through the parliamentary process before becoming law, during which time we expect further changes will be made. 

If you would like to discuss these issues further, please contact one of our experts listed below or your usual Osborne Clarke contact.
 


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
