‘The safest place in the world to go online’. This is the ambition for the UK, set out by the Government in the 2019 Online Harms White Paper, and echoed over the next two years as the proposals evolved into what is now the Online Safety Bill: ‘a milestone in the Government’s fight to make the internet safe’.
While there has been scattered applause, many have asked: what does it mean to be safe online?
The publication of a draft online safety bill, which promises to protect both safety and freedoms online, is at least a step change from ‘business as usual’. Platforms will be held to account for what has until now been entirely their domain: from their content curation systems to the enforcement of their terms of service. There are resonances with the Digital Services Act - a focus on removal of illegal content, risk management, and transparency. The remit of the DSA is, however, rather broader. Though the EU has set out that one aim of the DSA is to promote user safety, the protection of users’ fundamental rights is also a clear priority: these include, but are not limited to, freedom of expression, in recognition of the importance of the pursuit of ‘dignity, freedom, equality and solidarity’.
Reactions from civil society in the UK are varied: there is cautious optimism, but not without serious caveats. Some fear that the Bill will fail to go far enough to protect vulnerable users, particularly children. Organisations such as Hope Not Hate, Glitch and the Antisemitism Policy Trust (and Demos) have expressed concern that the Bill will fail to tackle extremism, hate and abuse online. At the same time, the Save Online Speech coalition, which includes Big Brother Watch, the Open Rights Group and Global Partners Digital, warns that the Bill ‘risks creating one of the most censorious online speech regimes in the Western world’ and fundamentally undermining the privacy of our communications.
The problem facing the Bill is that safety means a lot of different things to a lot of different people. What human rights groups think of as ‘safe’ online is very different to the idea of ‘safe’ that the security services use. What it means for a politician to be safe online is different to what it means for an activist to be safe online, which is different again to what it means for a footballer to be safe online. A one-size-fits-all approach to safety risks defaulting to the lowest common denominator - keeping people safe from things which threaten everybody, and leaving them to fend for themselves when facing more specific or complex threats. What it means for marginalised groups to be safe online is different to the generic concepts of safety that centre on the allegedly neutral ‘average user’.
One particular and complex threat, on which the Bill is silent, is gendered disinformation.
Gendered disinformation campaigns target women online, weaponising hate, rumour and gendered stereotypes to undermine, attack and discredit women, particularly those in public life: politicians, journalists, activists. Gendered disinformation threatens women’s safety and privacy, and shuts down their freedom of expression and political participation online. Examples range from faked nude images of women politicians being shared, to women being denounced as liars for speaking out against harassment, to women being portrayed as traitors and threats to the stability and safety of an authoritarian country. These campaigns threaten democracy around the world, and are an ever-increasing feature of online political disinformation. Addressing the harms they cause requires nuanced policy interventions at all stages of platform design and operations, including, but reaching far beyond, better content moderation (which in some cases means less moderation, and in others more).
But the Online Safety Bill risks overlooking this threat, for a crucial reason: it is by no means clear what, and whose, speech is protected under the Bill - how it understands the right to freedom of expression, and where the limits of that right lie. Whether gendered disinformation is protected or prohibited will be a difficult test case for a regulator.
Some gendered disinformation will cross the line into illegal speech - but likely only a small proportion, especially as gender is not yet a protected category in UK hate speech law. As such, only measures to tackle legal but harmful content are likely to directly target a large proportion of gendered disinformation. But the duty to act on legal but harmful content will apply only to a select few platforms - likely the highest-reach ones, such as Twitter and Facebook.
And if a large platform is deemed inhospitable to extremist views, users will flock to other platforms (as we saw in the US after the Capitol attack). These are likely to be smaller, fringe platforms, which may be out of scope of a regulator. Here, extremist misogyny can flourish and coordinate: but as long as that speech remains just on the legal side of the line, platforms will not be required to act on it. Failing to act on harmful and violent content, such as gendered disinformation, has a silencing effect on its targets, undermining online freedom of expression for many. Conversely, under the Bill’s regime, platforms could be criticised for infringing on freedom of expression if they moderate content beyond what the regulator requires.
Indeed, critics of the Bill say that requirements to remove or hide legal content by enforcing platforms’ terms of service will lead to infringements of freedom of expression. The Bill seeks to counter this with protections for ‘democratically important content’ and ‘journalistic content’. These are, at best, unclear and, at worst, counterproductive.
While these protections speak to a crucial need to safeguard against over-moderation, as they stand they risk entrenching, rather than challenging, threats in the sphere of political discourse. If platforms are expected to be especially careful about the moderation, deprioritisation or removal of political speech, this creates an incentive to define broad categories of protected speech: in particular, speech that critiques politicians. Women politicians are particular targets of gendered disinformation campaigns, with harassment and abuse dressed up as political critique. Without clearer guidance on how abusive political speech is to be dealt with, the Bill risks legitimising the under-moderation of political disinformation, with disproportionate harm to women.
However, the lack of attention in the Bill to the specific complexities of gendered disinformation does not mean there is no cause for optimism. There is room for action to be taken on gendered disinformation online, if the understanding and the will to act on it are there.
Avenues for redress
The Bill will require a certain level of transparency from platforms that will support the fight against gendered disinformation. Having clear terms of service and decision-making processes means that platforms will have to take a position on issues like gendered disinformation, and defend it. Consistent enforcement and better reporting tools for users should also mean that those affected by gendered disinformation have clearer avenues for redress if platforms fail to act on violence against them online.
The Bill also makes provision for ‘supercomplaints’ (see Part 5 of the Bill) to the regulator: where platforms are systematically failing to protect a particular group, a representative body will be able to bring a formal complaint to Ofcom, the UK’s communications regulator. Gendered disinformation is an example of a clear harm: if platforms fail to act on it, there would be a compelling supercomplaint to be made on behalf of the women affected.
Both of these enable better reactive responses to gendered disinformation. There is, however, some scope for a more proactive approach within the Bill.
Systemic approach
The duty of care approach that the Bill takes means that platforms are not being made liable for individual pieces of content, but are required to design systems and processes that reduce the likelihood of harm. Many of the systems and design changes that would curb the sorts of harm platforms are required to act against - illegal hate speech and extremism, or harmful health misinformation, for example - would also reduce the risk of gendered disinformation: improving automated curation systems so that authoritative content is promoted over polarising content, for instance. If platforms are genuinely incentivised to invest in building healthier spaces to reduce the chance of harms in scope occurring, this would reduce the risk of other harms overall.
The Bill still has many months of scrutiny ahead before it becomes law, and many codes of practice are still to be drafted to determine what actually changes online, meaning much is still up in the air. The gendered impacts of these changes will need to be central to this conversation if we are to see any meaningful change for women online. Three key ways this could be achieved are:
- Be specific: name the gendered harms which are in scope, and set out where the limits of protections for political speech will be drawn
- Consult: consultations by the Committee for pre-legislative scrutiny and by Ofcom should proactively engage with those affected by gendered disinformation, from across different communities, as well as with experts on the technical and social features of gendered disinformation online, to better understand the risks and impacts and how the Bill could mitigate them
- Assess: Ofcom should require companies to include gendered impacts in their risk assessments for both illegal and legal harms on their services. These assessments should take an intersectional approach, considering how gendered online harms affect different groups of women. They would then provide a basis on which appropriate platform interventions could be developed in codes of practice.
In the broader scheme of things, the Digital Services Act may offer a clue as to how we could progress towards genuinely building a safer, more empowering internet for everyone. If the UK were to expand its ambition to a more explicitly values- and rights-based internet, rather than merely a safe one, this could provide a roadmap for navigating the core conflicts within the Bill.
This commentary was first published by Demos.