The UK government has announced the publication of its draft landmark Online Safety Bill, designed to protect users of online content-sharing platforms from harmful material.
The content of the Bill broadly follows the direction of travel announced in the government's full consultation response published in December 2020, which we summarised here. However, it also contains a few controversial late entries, particularly in relation to user-generated online scams and additional protections for political speech.
Key features of the Bill include:
- a duty of care on in-scope companies to improve the safety of their users online. For most companies caught by the Bill, this duty of care will extend to acting against illegal content and activity online (for instance terrorism content, or content involving child sexual exploitation or abuse) as well as content which may be harmful to children (such as pornography or violent content). But for the biggest online content-sharing sites - referred to as "Category 1" services - there will also be duties to act against content which may fall below the threshold of a criminal offence but which is nonetheless harmful to adults (such as content about eating disorders, self-harm or suicide);
- requiring platforms to specify in their terms of service how users will be protected from illegal or harmful content and enforce their standards in a consistent manner;
- requiring platforms to have effective and accessible user reporting and redress mechanisms (including in relation to wrongful takedown); and
- Ofcom's designation as independent online safety regulator, with appropriate enforcement powers, including fines of up to £18 million or 10 per cent of annual global turnover, whichever is greater.
While several of the key talking points from the government's full consultation response in December appear to have flowed through to the draft Bill – including attempts to ensure freedom of expression in journalism by excluding from its scope "news publisher content" (as well as user comments on such content) and in broader contexts by requiring platforms to have effective mechanisms for users to challenge the take-down of their content – it also contains a few unexpected inclusions.
- A particularly surprising addition to the draft Bill – not least because it does not appear to have been the subject of a full consultation - is the duty on Category 1 services to expressly protect content which is "of democratic importance". This is broadly defined as content specifically intended to contribute to democratic political debate in the United Kingdom or a part of it and, in the words of the government, would include content "promoting or opposing government policy or a political party ahead of a vote in Parliament, election or referendum, or campaigning on a live political issue". The protections required by the Bill will need to be applied consistently and objectively, regardless of the platforms' political stance, based on transparent and accessible terms and conditions. This is likely to be of huge significance to larger social media platforms when it comes to content moderation – indeed it is unprecedented to impose duties on online platforms in relation to the type of content they should not remove. With the backdrop of major social media platforms banning Donald Trump's posts and accounts following riots in Washington earlier this year, it is easy to see how the requirements of the Bill could end up causing controversy, with the government potentially opening up a can of worms as to whether political speech should be distinguished from other important forms of free speech.
- While the Bill includes some expected safeguards requiring Category 1 services to ensure that the importance of journalistic content shared on their platform is taken into account (particularly in terms of how and whether this content can be accessed), the definition of "journalistic content" is very wide. It extends to user content "generated for the purpose of journalism", as long as there is a UK link regarding readership or interest. Indeed, the government's press release noted that "Citizen journalists’ content will have the same protections as professional journalists’ content". There will certainly be debate as to whether a given user post should be deemed journalistic in nature, and it seems feasible that this may be a line of argument pursued by vocal users looking for ways to challenge the take-down of their content.
- One notable omission from the government's full consultation response in December had been online scams and fraud. The response noted that other mechanisms are likely to be more effective in the fight against online fraud, and so "harms resulting from fraud" were said to be excluded from the intended scope. This saw a coalition of industry voices step up their campaign to call on the government to include it. To an extent, this pressure has worked, as the government's press release states that measures to tackle user-generated fraud are contained in the Bill. It uses the examples of romance scams – whereby victims are tricked into sending money or personal information to others they meet through online services – and fake investment opportunities posted on social media. It is worth noting that the draft Bill focusses on fraudulent behaviour carried out via user-generated content on online services. While user fraud and scams do not appear to be specifically called out in the draft Bill, it would seem that the government's interpretation is that they would be captured by the broader obligations on platforms to protect against illegal content or, in the case of Category 1 companies, content which could result in material risk of significant adverse physical or psychological impact on users. In this regard, fraud via advertising, emails or cloned websites would still fall outside the scope of the Bill.
Next steps for the Bill
Now that the draft Bill has been made public, it will be subject to pre-legislative scrutiny and so has a long journey ahead of it. This is not surprising: we have known that the government was minded to carry out pre-legislative scrutiny of the Bill since the DCMS minister suggested as much in April 2020.
Draft bills are usually scrutinised by a Parliamentary select committee over a period of months, following which the Bill can be subject to further amendments. Committees will frequently invite oral or written evidence from concerned organisations, and public consultation on the draft bill is often conducted in parallel.
However, the fact that the Bill is only in draft means that timings could be pushed back even further. Bills that go through pre-legislative scrutiny are generally introduced in a subsequent Parliamentary session, which means that the Bill wouldn't normally be formally introduced until the next session (unless the government accelerates the process). In some cases there can even be a public consultation on a draft Bill before pre-legislative scrutiny begins, which would see the Bill's formal introduction being pushed back even further.
All this indicates that the government, while positioning itself as a bold pioneer, is also proceeding with considerable caution. In fact, it is well behind the curve, with Bills covering similar ground already well advanced in other territories, such as Ireland and Australia.
How we can help
Osborne Clarke has a dedicated international Online Safety team which is following legislative developments in numerous jurisdictions in this area, including at European level through the Digital Services Act. Please contact one of the experts below to find out more.