Regulating big tech platforms: Content moderation requirements in the UK Online Safety Bill and the EU Digital Services Act

Policy paper

This paper outlines the policy background that has shaped the legislative journeys of the EU’s Digital Services Act (DSA) and the UK’s Online Safety Bill, examines how law-makers in each jurisdiction have responded to the policy challenges, and highlights some important differences in legislative approach.


This paper is part of the dossier “Digital rights post-Brexit - Regulatory divergence in the UK and the EU”.

 

  1. Introduction
  2. The policy background
  3. The legal framework
  4. Content moderation and freedom of expression
  5. Concluding remarks – different jurisdictions, different mandates?

 

1. Introduction

This paper draws a comparison between two important new laws aimed at regulating “big tech” (the most dominant companies in the information technology industry): the EU’s Digital Services Act (DSA)[1] and the UK’s Online Safety Bill.[2] 

Both laws address the obligations and responsibilities of large online platforms with regard to content posted by users and how it is moderated. Both put in place a framework for platforms to take down or restrict prohibited content.

This paper outlines the policy background that has shaped the legislative journeys of these two laws, examines how law-makers in each jurisdiction have responded to the policy challenges, and highlights some important differences in legislative approach.

2. The policy background

2.1 Digital Services Act

The Digital Services Act is grounded in EU Treaty obligations. As such, it must take account of the framework of Single Market rules that apply to digital services, as well as the EU Charter of Fundamental Rights. However, that is only part of the story. Its genesis has also been influenced by a policy discourse around online content that dates back to at least 2008. At that time, the issue concerned copyright infringement over peer-to-peer file-sharing networks. The debate came to a head over a proposed amendment to a major piece of telecoms law, known as the 2009 Telecoms Package, calling for persistent file-sharers to be cut off from the Internet by terminating or suspending their accounts. It was resolved in the European Parliament with an amendment that required a “prior, fair and impartial” hearing before any action could be taken to suspend or terminate users’ accounts.[3]

These debates had their roots in human rights discourse in Europe, notably regarding the right to freedom of expression. The issues evolved in the context of the Anti-Counterfeiting Trade Agreement (2012),[4] the net neutrality regulation,[5][6] and the Copyright Directive (2019).[7] A number of the challenges that arose during that period resurfaced as the EU Digital Services Act made its journey from initial proposal into law. The discourse is well developed in the three main institutions of the EU[8] and there is an established consensus around fundamental rights in this policy field.

2.2 Online Safety Bill

The Online Safety Bill is part-way through its legislative journey, with an anticipated conclusion in 2023. UK law is now separate from EU law and outside the jurisdiction of the European Court of Justice (ECJ). There is also the possibility of the UK leaving the European Convention on Human Rights, which could undermine the promotion and protection of human rights in the UK.

The Online Safety Bill is one of a number of new laws that impact the human rights framework, along with the Police, Crime, Sentencing and Courts Bill,[9] the Data Protection and Digital Information Reform Bill,[10] and the Bill of Rights.[11]

The Online Safety Bill arose out of a policy discourse that began around 2008 about protecting children online. Over the years, the discourse has expanded to address other policy areas, including but not limited to terrorism, violence against women, and online abuse.

3. The legal framework

3.1 EU Digital Services Act

The EU Digital Services Act resulted from a need to review the 20-year-old E-Commerce Directive (2000/31/EC), which governed the activities of online services. The aim was not to create an EU-wide content policing system, but to establish a consistent framework that Member States could apply when asking online platforms to restrict certain categories of unlawful speech. The law sets out the parameters of the regime, creating consistent cross-border rules that reduce the burden on providers and provide a level playing field.

The three provisions from the E-Commerce Directive that are key to creating consistent cross-border rules have been retained in the Digital Services Act: 

  1. Mere conduit (ECD Article 12 / DSA Article 4) means that broadband providers are treated as neutral carriers and bear no responsibility for the content they transmit.
  2. No general monitoring (ECD Article 15 / DSA Article 8) means that service providers may not be required to systematically monitor content or to seek out specific types of content.
  3. The hosting provision (ECD Article 14 / DSA Article 6) shields hosts and online platforms from liability for illegal content on their platforms, unless they have ‘actual knowledge’ of it (in which case they must take it down).

The Digital Services Act takes a horizontal approach to the categorisation of online services, establishing a layered framework of responsibilities. Both mere conduit and hosting services are in scope, as well as caching services, which store copies of files. The obligations for content moderation fall on hosting services and online platforms. Online platforms have a range of additional responsibilities, including more sophisticated reporting networks and transparency requirements for recommender systems.

A new provision in the Digital Services Act related to content moderation is a “notice and action” procedure. A proposal of this type was first put forward by the European Commission in 2012,[12] but it was subsequently shelved. Updating the E-Commerce Directive was deeply controversial. Since the adoption of the Directive, the growth of cloud services and social media platforms had drastically changed the landscape of services that might or might not fall under the liability exemptions. The amount of user-generated content also increased exponentially, and more governments sought to put pressure on companies to implement voluntary measures against alleged illegal or ‘harmful’ content. Attempts to harmonise liability exemptions and content moderation rules through reform of the Directive presented an opportunity to introduce safeguards, but risked a one-size-fits-all solution that avoided addressing the societal impacts of changes in digital services. The question of how to regulate online content made little progress until 2016, when the European Commission issued a Communication that sought to address “illegal” and “harmful” content.[13] The Digital Services Act concentrates on “illegal” content and largely steers clear of regulating “harmful” content.

The new law also singles out what it calls Very Large Online Platforms (VLOPs), defined as those with more than 45 million average monthly active users. It obliges these very large platforms to carry out risk assessments to identify any systemic risks arising from their services[14] and to put in place mitigation measures to tackle those risks.[15] The definition of systemic risks is broad and includes, for example, illegal content, hate speech, privacy violations and election manipulation.

The Act does not define what the illegal content would be, as the EU does not have competence over criminal law. Instead, illegal content will be defined by the Member States, which will establish the precise content to be restricted according to their national law.

Under the Digital Services Act, online platforms will not have to make the assessment of illegality themselves, but will be asked to act on reports or notices submitted to them, which is referred to as the “notice and action” procedure. The entity submitting the notice will have to provide a “reasoned justification” for why the content should be removed. Platforms are expected to act diligently, according to a uniform set of rules, to protect the right to freedom of expression as guaranteed by the Charter of Fundamental Rights. There are provisions to sanction those who submit egregiously unfounded complaints.
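To make the workflow concrete, the sketch below is illustrative only: it assumes a platform models incoming notices as simple records and acts on them rather than proactively assessing all content. The field and function names are hypothetical, drawing on the “reasoned justification” requirement described above and the content-location and statement-of-reasons requirements discussed in section 4.1.

```python
# Illustrative sketch only: a hypothetical record for a notice under a
# "notice and action" procedure, and a platform-side handler that acts on the
# notice rather than proactively assessing all content. Field names are
# assumptions for illustration; they are not taken from the text of the DSA.

from dataclasses import dataclass
from datetime import datetime


@dataclass
class Notice:
    content_url: str             # where the allegedly illegal content is located
    reasoned_justification: str  # why the notifier considers the content illegal
    submitter: str               # who submitted the notice (e.g. a trusted flagger)
    submitted_at: datetime


def handle_notice(notice: Notice) -> str:
    """Act on a submitted notice.

    A real implementation would involve human review, a statement of reasons
    sent to the affected user, and appeal routes (see section 4.1); this
    function only illustrates the shape of the workflow.
    """
    if not notice.reasoned_justification.strip():
        return "rejected: no reasoned justification provided"
    return "queued for diligent, objective and proportionate review"


if __name__ == "__main__":
    example = Notice(
        content_url="https://example.com/post/123",
        reasoned_justification="The post appears to infringe national law X.",
        submitter="trusted-flagger@example.org",
        submitted_at=datetime.now(),
    )
    print(handle_notice(example))
```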

3.2 Online Safety Bill

The Online Safety Bill is premised on the UK’s departure from the EU. The ‘no general monitoring’ provision is no longer part of UK law,[16] and neither are the mere conduit and hosting liability shield provisions found in the DSA.

The UK has introduced what amounts to a general monitoring requirement in Clause 9. The provision could be interpreted as requiring an upload filter. An upload filter intercepts content while the user is uploading it onto the platform and checks whether it matches known illegal content. This would occur continuously for all content. When the filter finds a match, it removes the content, effectively making it disappear before the user even knows what has happened. Hence, this would be a form of prior restraint,[17] that is, an action by the State that prohibits speech or other forms of expression before they reach the public.
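As a rough illustration of the mechanism described above, the sketch below assumes a simple hash-matching design; the Bill does not prescribe any particular technology, and every name and value in it is a placeholder.

```python
# Illustrative sketch only: a simplified upload filter of the kind described
# above, assuming a hash-matching design. The hash set and all names are
# placeholders; the Bill itself does not prescribe any particular technology.

import hashlib

# Hypothetical database of hashes of content already classified as illegal.
KNOWN_ILLEGAL_HASHES = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",  # placeholder
}


def upload_filter(uploaded_bytes: bytes) -> bool:
    """Return True if the upload may be published, False if it is blocked.

    The check runs before publication: blocked content never reaches the
    public, which is why critics describe such filters as prior restraint.
    """
    digest = hashlib.sha256(uploaded_bytes).hexdigest()
    return digest not in KNOWN_ILLEGAL_HASHES


if __name__ == "__main__":
    # Every upload is checked automatically, for all users, all the time.
    post = b"example user post"
    print("published" if upload_filter(post) else "blocked before publication")
```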

The Bill creates three categories of online service, which are loosely defined on the face of the Bill. The final criteria – and hence which services fall into each category – will be determined by Ministers after the Bill has been adopted.

  • Category 1 services are widely assumed to be the large online platforms.
  • Category 2A is likely to be search services.
  • Category 2B will be all other online services.

The Bill excludes a few services such as one-to-one voice calls, but broadly includes all services where users can communicate with each other and share content.

The Bill requires Internet companies to monitor their platforms for illegal content. It additionally creates a new concept of ‘legal but harmful’ content in Clause 11 (content harmful to children) and Clause 13 (content harmful to adults). Separate risk assessments are required for children’s online safety, for adults’ online safety, and for illegal content.

For illegal content, the Bill requires online platforms to identify the prohibited content and remove it. Content may be reported to them, or they may be required to proactively seek it out. The Bill requires them to assess the illegality of the content, and grants them a wide discretion to make that assessment (what they would reasonably consider illegal). The assessment of illegality has to be made against a list of 28 criminal offences [see Schedules 5, 6, and 7, and Clauses 151, 152 and 153]. However, the Bill does not define how an offence would manifest itself in a social media post.

The Bill does not require online platforms to take any restrictive action against harmful content, but it does require them to state in their terms and conditions what they will do for each type of harmful content. Ofcom, the regulator, is required to oversee their compliance. The Bill gives four options for action:

  1. take down an individual piece of content;
  2. suspend the user’s account;
  3. limit its distribution (a ‘shadow ban’);[18]
  4. allow the content, but do nothing with it (e.g. not promote it).

The assessment of harmfulness has to be made against criteria to be established by government Ministers after the Bill has been adopted into law. According to an indicative list in a Ministerial Statement issued in July 2022, harmful content will include content relating to eating disorders, legal suicide content, promotion of self-harm, harmful health content that is demonstrably false, and online abuse and harassment.[19] For children, it also includes pornography.

These measures have been criticised for being too vague, excessively broad or imprecise,[20] and for not allowing users to foresee the consequences of their actions. Moreover, the Bill would grant a wide discretion to the online platforms that will assess their behaviour.[21]

Online platforms will not only judge the lawfulness or harmfulness of the content, but also the severity of the punishment. In legal jargon, they will determine the proportionality of the measures taken to restrict the content; in this context, that means how light or heavy-handed the restriction should be.

4. Content moderation and freedom of expression

A key policy challenge when seeking to address content moderation is freedom of expression. This is an area that is handled differently in the two laws.

4.1 EU Digital Services Act 

The EU’s Digital Services Act explicitly recognises the right of European citizens to exercise their rights as guaranteed in the Charter of Fundamental Rights, Article 11 of which protects the right to freedom of expression. It recognises that the measures for tackling illegal content risk interfering with this right. This is informed by the decade-long discourse outlined in section 2 of this paper (The policy background).

Online service providers are asked to fulfil their mandate in a way that respects freedom of expression rights. The notice and action regime governs the restriction of content. The Digital Services Act does not oblige online platforms to monitor all content. They are asked to act in a “diligent, objective and proportionate manner in applying and enforcing” the restrictions [Article 14(4)]. Similar language is repeated throughout the text, for example the “timely, diligent and non-arbitrary” processing of notices [Recital 52]. Online platforms are expected to follow these rules even when restricting content under their own terms and conditions. The European Commission is asked to evaluate the impact of the measures in the Digital Services Act on the right to freedom of expression every five years [Article 91].

The Digital Services Act incorporates robust “ex ante” (up-front) and “ex post” (after the fact) procedural safeguards against the possibility that the measures could take down or restrict users who have acted lawfully. These include:

  • Online platforms must notify users whose content they are restricting. The notification must provide information on the content being restricted and its location [Article 16], a reasoned statement of justification [Article 17], and an explanation of how the user may appeal the restriction, including the option of judicial redress [Article 17].
  • Online platforms are expected to deal with complaints in a timely manner [Article 20]. Users must have access to an effective appeals process, and a right to file their appeal within six months of the restriction being imposed.
  • The Digital Services Act sets up a system of “trusted flaggers” who report illegal content [Recital 61]. These trusted flaggers are entities which have demonstrated particular expertise and competence, and will be vetted by new regulatory bodies known as Digital Services Co-ordinators. Any trusted flagger who egregiously submits unfounded or inaccurate reports could have their trusted flagger status suspended [Article 22].[22]

The Digital Services Act does not include an exemption for media from content moderation rules; however, this is the subject of a separate legislative initiative.

4.2 Online Safety Bill

The Online Safety Bill calls on online platforms to have regard for the right to freedom of expression “within the law”. Users who feel any restriction of content is unjust have the possibility to access a complaints procedure [Clause 18]; however, the requirements for that procedure are currently unspecified. The Bill says that users may bring a claim for breach of contract [Clause 19(4)] if their content is restricted in breach of the platform’s terms of service. Compliance will be regulated via a Code of Practice drafted by Ofcom.

The Bill includes an exemption for media (referred to as ‘news publisher content’), which will be protected from the platforms’ content moderation actions provided they comply with certain criteria, for example that they are registered with a press or broadcast regulatory body. These media will have rights to redress in cases where their content is wrongfully restricted.

However, one of the most evident differences in the way the two laws would operate is that the Bill does not incorporate redress rights for ordinary users: there are no procedural safeguards and no right to take a complaint to court. Additionally, there are no constraints on those who file reports of content to be restricted.

Some of the measures themselves are in conflict with freedom of expression. For example, the measures to restrict illegal content include a requirement to “prevent users from encountering” the content (Clause 9). This could be interpreted as a form of prior restraint, which is currently prohibited in English law, and in a technological context, it could be an “upload filter”.

The Online Safety Bill also includes a requirement for private messaging services to monitor and moderate content. This is widely interpreted as meaning that encrypted services are in the scope of the Bill and that they could be required to circumvent encryption, although this is not explicitly stated. This is significant, as encryption is one of the building blocks of digital technology and secure online commerce. Encrypting personal data protects against interception by a third party while the data is in transit. Undermining encryption could make our private communications unsafe.[23] The Digital Services Act does not include a requirement of this type; however, it is the subject of a separate legislative initiative.

5. Concluding remarks – different jurisdictions, different mandates?

The differing approaches taken in the EU’s Digital Services Act and the UK’s Online Safety Bill to the content moderation responsibilities of online platforms reflect differences in the policy challenges and policy background in each jurisdiction.

As noted in another paper in this series, Creating a Coherent Strategy for Digital Policy,[24] regulators monitor the implementation of legislation rather than impose their own vision. Through the Online Safety Bill, the UK will place a great responsibility on the regulator (Ofcom), while simultaneously retaining executive control of the policy priorities. The tensions are obvious: the executive will have the power to delineate the framework for content removal, meaning that the potential for both the executive and the regulator to appear less than impartial is significant.

These differences in legislative approach also risk impacting the Internet eco-system as a whole, for they raise questions about how online platforms deal with potentially conflicting mandates in different jurisdictions.

General monitoring of online content is forbidden in the EU, but will be expressly mandated in the UK, affecting all platforms that operate in both jurisdictions. This also raises risks for users’ content and of loss of service availability in one jurisdiction.

Different approaches to categorising online services could lead to an online platform having a different set of legal obligations in each jurisdiction, raising questions around how they should approach implementation.

The potential for EU standards to dominate is discussed in other contexts in our paper on Institutional Challenges,[25] and applies equally here. Until now, the US Digital Millennium Copyright Act (DMCA) has been the de facto route for issuing complaints to online platforms about copyright infringement. If the EU notice and action procedures in the DSA become the de facto standard for other types of complaints about wrongfully removed content or accounts, what happens when content is lawful in one jurisdiction and unlawful in another?

These are not purely technical or legal matters. Freedom of expression, for example, is an important area that is handled differently in the two laws, and ultimately these decisions affect millions of people across Europe and the UK whose rights the State is supposed to guarantee.

Moreover, some content moderation issues arise out of platform power and a digital environment designed by corporations in support of corporate aims. The problematic attention market (business models that focus on capturing and monetising users’ attention) that has driven the development of recommendation engines and user profiling has created some of the issues that the DSA and the Online Safety Bill are trying to solve. As noted in Strengthening Competition Policy as a Tool for Platform Regulation[26] in this series, a more appropriate aim may be to move platforms away from that attention market and incentivise a shift towards a system based on user preferences.

The main takeaway should be that the policy choices made by legislators, as exemplified by these two laws, will have ramifications for law-makers, regulators and millions of ordinary users for many years to come.

 

Acknowledgments: We would like to thank Dr Michael Veale, Augustine Leong and Aishwarya Kaiprath Shaji for their contribution to this briefing.

 

[1] The EU Council of Ministers (Council) formally adopted the Digital Services Act (DSA) on 4 October 2022. On 19 October the president of the European Parliament and the president of the Council will sign the DSA. After being signed, it will be published in the Official Journal of the European Union and will start to apply fifteen months after its entry into force. As such, we have made reference to the Act as outlined here: https://www.europarl.europa.eu/doceo/document/TA-9-2022-0269-FNL-COR01_EN.pdf

[3] Horten, Monica (2012) The Copyright Enforcement Enigma: Internet Politics and the Telecoms Package. Palgrave Macmillan.

[5] Marsden, Christopher T. (2017) Network Neutrality: From Policy to Law to Regulation. https://library.oapen.org/handle/20.500.12657/31900

[9] Police, Crime, Sentencing and Courts Act 2022. https://www.legislation.gov.uk/ukpga/2022/32/contents/enacted

[10] Data Protection and Digital Information Reform Bill. https://bills.parliament.uk/bills/3322

[12] A clean and open Internet: Public consultation on procedures for notifying and acting on illegal content hosted by online intermediaries.

[13] Online Platforms and the Digital Single Market: Opportunities and Challenges for Europe, COM(2016) 288/2.

[14] Digital Services Act, Article 26.

[15] Digital Services Act,

[18] Horten, M. (2022) Algorithms Patrolling Content: Where’s the Harm? https://ssrn.com/abstract=3792097

[22] Husovec and Roche Laguna (2022) Digital Services Act: A Short Primer. https://private-law-theory.org/?p=44753