Creating a coherent strategy for digital policy: Tensions and overlaps in emerging regulatory initiatives in the digital space

Policy paper

This paper outlines the main types of emerging digital regulation in the UK and the EU, characterises some of their interactions, and comments upon capacities needed for coherent strategies for digital policy.


Read the full dossier "Digital rights post-Brexit - Regulatory divergence in the UK and the EU"


  1. Introduction
  2. Emerging and current initiatives
  3. Tensions and overlap
  4. Governing the overlaps
  5. Conclusion

1. Introduction

Globally, a significant range of regulatory initiatives is emerging in the digital space, covering areas such as data protection and AI, content policy and competition. While the issues they deal with may seem distinct at first glance, in practice they are highly interwoven.

The need to make such initiatives coherent and rigorous in their collective approaches cannot be overstated. This paper will outline the main types of emerging digital regulation in the UK and the EU, characterise some of their interactions, and comment upon the capacities needed for coherent strategies for digital policy.

The scope of action for regulators is limited by the choices that legislators make. At the same time, the problems facing EU and UK regulators are similar, even if their goals or the tools at their disposal differ. Regulatory co-operation is therefore likely to be necessary for regulators who want to reach similar outcomes.

2. Emerging and current initiatives

Emerging and contemporary digital regulation includes:

Content-related initiatives, including legislative proposals such as the UK Online Safety Bill[1] and the EU Digital Services Act,[2] and private governance such as the Global Internet Forum to Counter Terrorism[3] (concerning databases of terrorist content), the Internet Watch Foundation[4] (concerning child sexual abuse material) and the Facebook Oversight Board[5] (concerning private challenges to the implementation of terms of service). These primarily focus on the types of content and activity allowed on platforms, the rights of users, and the legal or contractual obligations companies have concerning such content.

Data-related initiatives, including the regulation of personal data under data protection, privacy and surveillance law (such as the Data Protection Act 2018,[6] the UK GDPR,[7] the draft Data Protection and Digital Information Bill,[8] the Privacy and Electronic Communications Regulations[9] and the Investigatory Powers Act),[10] as well as the regulation of non-personal data and its portability and openness to other actors (such as the EU’s Free Flow of Non-Personal Data Regulation,[11] Data Act[12] and aspects of its Digital Markets Act).[13]

Competition-related initiatives, such as ex ante provisions preventing anti-competitive behaviour by platforms in strategic gatekeeping positions (such as those to be overseen by the UK’s Digital Markets Unit), and sectoral initiatives promoting openness and interoperability, such as open banking (through the UK Competition and Markets Authority’s (CMA) 2017 retail banking remedies[14] and the 2015 EU Payment Services Directive 2)[15] or data sharing in connected devices (in the draft EU Data Act).

3. Tensions and overlap

The different types of interventions to regulate digital platforms overlap.

Content-related regulation is not just about analysing a single post in a vacuum, but can involve AI systems that profile users and their behaviour over time, particularly given longitudinal trends in areas like grooming or radicalisation. Because these systems involve the monitoring of all individuals, their deployment raises challenges of privacy and proportionality. Content regulation also interacts with surveillance law: when private firms install the capabilities to monitor and analyse posts, it becomes easier for the states in which they operate to mandate that such platforms scan or monitor in ways that may be undemocratic or subject to limited oversight. Content regulation also interacts with competition law, as regulation such as the Online Safety Bill relies on large firms, with deep pockets, to engage in both human moderation and the development of technologies for moderation.

Relying on deep-pocketed Internet platforms to undertake global content moderation to meet political goals, such as the regulation of harmful behaviours (including behaviour online which may hurt a person physically or emotionally), makes it difficult to tackle the power of these platforms. Decentralised or more competitive markets are harder to create when regulation relies on powerful, centralised and rich intermediaries. Nevertheless, it would be a mistake to accept such an outcome and not to advocate for a more competitive digital marketplace.

Content policy gains a great deal of political attention, despite dealing only with the symptoms of the problem of platform power rather than addressing root causes. It also cannot easily deal with the systemic nature of content and behavioural issues, as it will always seek to deal with the worst individual cases that meet the threshold of a breach of the law, or a breach of terms and conditions. Most unpleasant or unwanted behaviour may approach but not reach such thresholds, yet remain incentivised and prioritised through the ‘attention market’ discussed in the competition, data protection and content papers. Thus the real solutions to content policy may lie elsewhere, particularly in competition and data protection interventions, which can reduce the exploitative markets producing these problems and profiting from treating human attention as a commodity.

Data-related initiatives cannot escape wider interactions either. Comments and audiovisual materials are often simultaneously personal data, implicating a range of privacy or autonomy interests, such as private opinions, activities or personal characteristics, while also being subject to a range of content regimes, such as defamation or harassment law.

Large technology firms are using the troves of relatively unstructured data that they collect to train powerful text and image analysis and generation models, such as Google’s Pathways Language Model (PaLM). This may create competition concerns and exclude from the market smaller AI firms that do not have access to the same amounts of data.

Over a decade of unenforced privacy regulation has enabled a market for online tracking, much of which is in breach of data-related laws. Firms in this market claim that enforcing the law now would concentrate it in the hands of a few adtech (advertising technology) intermediaries such as Google and Meta.

Competition-related initiatives are thought by some to threaten content moderation online, insofar as they may make it less lucrative to run a platform. They often involve opening up datasets and personal data flows to competitors, sometimes on fair, reasonable and non-discriminatory (FRAND) terms, but this comes with privacy risks.

Similarly, enforcement may require regulators to have significant powers to examine sensitive data; carried out across the world, this could give states access to content that would otherwise be protected from them.

Most importantly, digital competition initiatives like open banking rely on the transfer of personal information, which requires high customer trust, and thus the support of wider data protection laws. Where such tools are used, competition policy relies on strong data protection to be effective and to mitigate the risks.

As we argue in our paper ‘Strengthening competition policy for effective regulation of digital platforms’,[16] the application of competition policy which places the consumer experience at the heart of the market may improve the regulation of content through consumer pressure. For instance, consumers with genuine choice may favour better moderation and better content prioritisation than that which suits an ad-driven attention market.

Companies frequently claim that pro-competitive obligations, such as interoperable messengers or allowing competitor app stores, would undermine the security and privacy designed into their products. Some commentators dispute this, arguing that while such obligations may necessitate a redesign with a different approach, choices which support privacy, security and interoperability are all possible, although they may require firms to rethink the way they operate and may challenge those firms’ sources of power, such as acting as sole gatekeeper to their user base.

4. Governing the overlaps

Regulators across these areas are already recognising the need to establish working relationships with each other. Memoranda of understanding exist between the UK Information Commissioner’s Office (ICO), the CMA and Ofcom, and the non-statutory Digital Regulation Cooperation Forum[17] has brought these regulators together further, to coordinate their work, ensure aspects of digital policy are effectively covered, and prevent initiatives from different regulators affecting the same area of the economy from conflicting. In the EU, similar relationships exist through the Digital Clearinghouse[18] launched by the European Data Protection Supervisor.

Successfully governing the overlaps requires more than meeting spaces, however. One of the largest challenges is that the framing of, and evidence base about, what these tensions look like in practice is shaped not by regulators but by technology platforms. Because the Internet and computing technologies have been so thoroughly shaped by powerful players over the last two decades, it is difficult for regulators to imagine the whole space of trade-offs that may exist, and the ways to defeat these trade-offs and get “the best of both worlds”.

Regulators have long been reluctant to engage in the design of technologies and technological landscapes themselves.

The word ‘regulate’ implies acting like a thermostat: turning forces up or down, but letting the forces being regulated determine much of the direction and result. This may not be enough in the current situation, as unlike in many areas of regulation, regulators do not always have a clear political idea of what “good” looks like, or what options may actually be available. Statements by commercial actors about what is socially or technologically possible may not be wholly genuine, and technology firms have long changed their practices under pressure in ways they had pledged they could not or would not, such as increasing moderation or showing users their history of tracking.

Regulators may need new kinds of capacities to prepare for the future. This might involve funding research projects or small organisations with different visions of the distribution of power and responsibilities online, or employing experts in horizon scanning and in identifying trends through qualitative and quantitative research. Such capacities need to be used when engaging with regulated entities around the types of trade-off described above, in order to challenge assumptions and engage with these entities on a more level playing field.

5. Conclusion

Establishing a clear vision for our digital economy relies on government and legislators rather than regulators.

Ensuring government departments and policymakers can develop a vision for our digital economy based on a deep understanding of the issues may be challenging, but appears necessary given the complexity of the interactions and the social importance of the outcomes. This requires resources for political actors, action from civil society and funding for research conducted to academic standards. Similarly, regulators may need new kinds of capacities, such as funding for research projects or employing experts in horizon scanning, in order to build a more informed evidence base from which to engage regulated entities.

There are some clear choices for the UK and EU. As we explore in our paper ‘Strengthening competition policy for effective regulation of digital platforms’, competition policy appears to be under-explored as a tool for improving digital policy outcomes. There is a danger of over-playing the risks of competition interventions and of entrenching digital monopolies through over-reliance on regulation to deliver outcomes, for content policy in particular.

Similarly, there are risks that data protection is neglected as a fundamental tool of developing fair markets, thus endangering competition policy outcomes, especially for the UK as it seeks to deregulate data protection.[19] If, however, the three areas are seen as potentially complementary, there are productive ways forward, not least because strong competition and data protection policy can reduce the issues of problematic content and behaviour by reducing or removing the dynamics which currently incentivise their production.


Acknowledgments: We would like to thank Dr Michael Veale for his significant contribution to this briefing.