This paper analyses two case studies in regulatory co-operation after Brexit: the “one-stop-shop” in the EU General Data Protection Regulation (GDPR) and the proposed EU AI Act and its approach to standardisation.
- Introduction
- Data Protection and the “One-Stop-Shop”
- The EU AI Act and co-operation on product certification
- Common challenges: capacity and speed
- Concluding remarks
1. Introduction
The highly technical and quickly evolving nature of the digital industry requires significant regulatory capacity. It also requires significant global co-operation, as the harms that emerge online often cross borders and are linked to entities that are partly or entirely located outside of the jurisdiction where the harm occurs.
In this short paper, we will consider two case studies: the “one-stop-shop” in the General Data Protection Regulation (GDPR), and the proposed EU AI Act and standardisation.
We will then consider broader issues of capacity and speed in the institutions regulating the digital sphere.
2. Data Protection and the “One-Stop-Shop”
While the UK currently has a domestic law similar to the EU’s General Data Protection Regulation (GDPR),[1] often called the UK GDPR,[2] the provisions relating to regulatory co-operation were removed at the end of the Brexit transition period.
In the EU, the GDPR offers a “one-stop-shop” for businesses to engage with a single regulator even when they work in more than one country, and allows data subjects to raise complaints with the regulator in their own country. For example, if a technology company based in France delivers services in Sweden, a Swedish citizen can complain to their own regulator, in Swedish; the Swedish regulator will pass the complaint to the French regulator as lead authority, monitor its progress, and liaise with the data subject on the French regulator’s behalf. If there are many complaints, as is common with a large data breach, they can be combined into a single action. When the French regulator reports back with its decision, that decision is subject to scrutiny, amendment and voting by the European Data Protection Board,[3] the group of all EU regulators, because it does not concern solely French individuals but relates to data processing that crosses borders. Regulators also have the right to request mutual assistance from, and to open official joint investigations with, their European colleagues.
Results of the “one-stop-shop” arrangements and Brexit
Following Brexit, the UK regulator, the Information Commissioner’s Office (ICO),[4] is no longer legally bound by the GDPR. This does not mean there is no co-operation with its neighbours: less formal ties and agreements exist through international fora, such as the Global Privacy Assembly[5] and the Commonwealth Common Thread Network.[6] However, these have limits. UK technology firms must either deal with the ICO and every European regulator separately, facing separate, individual fines in the case of a breach of the law, or set up a European branch and designate it as their “main establishment”,[7] which comes with significant costs. For these reasons, technology companies have less incentive to base themselves in the United Kingdom. The largest technology firms typically serve the United Kingdom out of the Republic of Ireland (Meta/Facebook, Twitter, Alphabet/Google), the Netherlands (Netflix BV, Uber BV), or Luxembourg (PayPal, Amazon).[8] The ICO will face challenges in investigating these out-of-jurisdiction firms, for example by inspecting premises or data centres, as it no longer has a legal right to ask for mutual assistance from European data protection authorities.
Critics of the one-stop-shop model have pointed to the “blockage” of investigations into some of the largest technology firms by the Irish regulator.[9] Data protection authorities and politicians both in Ireland and across Europe have publicly and privately expressed their frustration at the slow and cautious pace of the Irish regulator.[10] The causes of the Irish regulator’s poor performance include a lack of funding and historic institutional weakness; it is not the only regulator with such problems, but it has a disproportionate impact because of the Irish economy’s role as a European data centre for many global tech platforms.
EU regulators have also rejected many of the few decisions by the Irish regulator that have made it to the European Data Protection Board, sending them back to the Irish regulator to be reworked. These decisions have concerned companies such as Meta and Twitter, and the referrals have been made both because the proposed fines were too small and because of disagreement with the legal findings made by the Irish regulator.
While the ICO now lacks formal and assured co-operation with the data protection authority in Ireland, it can also act without having to go through it, giving the UK a chance for regulatory leadership in support of UK residents’ data rights.
Some large technology companies still choose to have offices and legal entities in the UK, which remains an important digital market (e.g. for developing technology or for selling advertising), and these can be targeted for action or injunctions by the UK’s regulators. However, where they do not have offices in the UK, it may be difficult for the UK’s regulators and individuals to enforce legal requirements.
3. The EU AI Act and co-operation on product certification
The EU has proposed an Artificial Intelligence Act,[11] which, among other things, requires that a small subset of AI tools classified as “high-risk systems”, such as those involved in hiring and firing, managing critical infrastructure, dispatching ambulances, or assisting judges and law enforcement, undergo a process of self-certification against a technical standard that has yet to be prepared. Once certified, the tools can be (virtually) assigned a CE mark[12] and can move freely across the Union without being regulated further by any Member State. This is intended to allow the free movement of AI systems within the EU and to avoid regulatory fragmentation whilst upholding minimum standards for these systems, although it has been criticised for not placing enough focus on fundamental rights.[13]
Consequences for the UK
High-risk AI providers based in the UK will have to certify to EU product standards in order to sell to the EU. This is no different from any other goods that require CE marking, a process familiar to manufacturers of riskier goods such as toys, machinery, medical devices, construction products and personal protective equipment (PPE).
These goods can currently be sold under CE marking in the UK, although the Government plans to phase this out in 2023 and require producers to apply separate UKCA marking if they wish to sell in the UK.[14]
Institutional Co-operation in Standards
In practice, the EU rules on high-risk AI are likely to become the global standard for such products. Standards can be critical in shaping the kinds of business activities or products companies may provide. As standards are increasingly made through private standards bodies, there are challenges for public accountability and for ensuring fair representation of those needing, or subject to, the standards, including the public.
New amendments to the Standardisation Regulation by the European Commission[15] mean that the UK will be excluded from the process. The British Standards Institution (BSI)[16] will no longer have a vote on these AI standards within CEN/CENELEC,[17] the private European standards organisations that the European Commission will ask to draw up high-risk AI standards.
The BSI is not a public body, but national standardisation bodies do play a representative function in highly policy-relevant international negotiations. UK small and medium enterprises (SMEs), for example, rely on influence through the BSI in order to have some say in the international standard-setting scene. This could be through commenting on draft standards, proposing new standards or adopting a voluntary role on a BSI committee.
If the trend to governing digital technologies through private standards increases, the UK will need to consider how to make its standardisation processes more globally impactful to counter the loss of influence in European standardisation.
Mutual Recognition
The UK’s AI regulation plans have not yet been laid out, and are expected in a White Paper later in the year.[18] However, it is worth noting that UK SMEs producing high-risk AI systems will be de facto governed by the EU AI Act if they sell their models internationally, as the buyers of these systems are often public authorities and the domestic market alone is limited.
The UK may wish to build a regime in anticipation of securing mutual recognition between the UKCA and CE marking of AI systems. This will need alignment with other trade policy, and will largely depend on whether the UK seeks a mutual recognition agreement (an agreement between two trading partners to reduce technical barriers to trade) concerning the assessment of conformity to certain standards covering AI. This may become easier if the EU standard becomes the global standard within standardisation bodies like the International Organization for Standardization (ISO).[19]
4. Common challenges: capacity and speed
All digital regulators in the UK and the EU face challenges of capacity and of speed. Capacity issues relate to the technical capacity to scrutinise technology and data; the industry know-how that may be needed to predict business trajectories and emerging policy challenges; and the analytic capacity to link these issues together into a workable policy strategy. These issues result from a number of different factors. Public sector pay-scales are often too limited to attract or retain experts in this area. This has long been recognised by some regulators, such as the Competition and Markets Authority (CMA),[20] which hires experts at significantly higher salaries and is known across the world for the technical quality of its work.
Another capacity challenge comes from the way that the civil service is structured in the United Kingdom. It is difficult for specialists to progress within the civil service as experts: they are often incentivised to move “diagonally” to different areas or sectors in order to progress in their careers. Limited pathways exist to retain institutional knowledge in highly specialist areas, such as the digital society. This can increase reliance on expensive external consultants, which may not be good value for money and may not deliver the necessary rigour.
Regulatory practices in the UK and elsewhere are also too slow to be effective. Platform companies often adopt tactics that are illegal or borderline legal, and treat these as a “licence to operate” until they are investigated and potentially fined. Even significant fines expressed as a percentage of annual global revenue, as most UK and EU legislative instruments currently set them, may pale in comparison to the multi-year revenue gained from these questionable practices and the market positions they enable. Yet in the existing regulatory framework, appeal processes favour firms over complainants. In UK data protection law, firms can appeal ICO decisions, such as fines, to the tribunal system for a very small cost, whereas complainants, including civil society organisations, can only go to the High Court for substantive review of the ICO’s handling of their complaints, at a cost which is prohibitive for all but the richest individuals. Furthermore, firms continue practices already deemed illegal in other sectors, when carried out by other firms, or by higher courts, requiring further regulatory action to reach closure.
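To make the fines arithmetic concrete, the sketch below uses purely hypothetical figures, not drawn from any real case: the 4% cap mirrors the GDPR’s top tier of fines, while the turnover, practice revenue and enforcement delay are all assumptions.

```python
# Back-of-envelope comparison of a turnover-based fine against the revenue
# earned from a questionable practice. All figures are hypothetical
# assumptions for illustration, not data from any real enforcement case.

annual_global_turnover = 100e9   # assumed annual global turnover (£100bn)
fine_cap_rate = 0.04             # 4% cap, mirroring the GDPR's top tier

practice_annual_revenue = 6e9    # assumed yearly revenue from the practice (£6bn)
years_until_enforcement = 4      # assumed years before enforcement concludes

max_fine = fine_cap_rate * annual_global_turnover
revenue_from_practice = practice_annual_revenue * years_until_enforcement

print(f"Maximum fine:          £{max_fine:,.0f}")               # £4,000,000,000
print(f"Revenue from practice: £{revenue_from_practice:,.0f}")  # £24,000,000,000
```

On these assumptions, the maximum fine is only a sixth of what the practice earned before enforcement concluded, illustrating why a fine alone may function as a cost of doing business.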
Technology firms have significant legal and financial resources at their disposal to challenge regulators in the digital space on both procedural and substantive grounds at every stage. This creates huge costs for regulators simply in undertaking a normal course of action, particularly within the common law system, costs that are not typically seen in faster and cheaper civil law jurisdictions. Regulators with limited budgets have to anticipate significant costs as normal business whenever taking action against such firms. The Irish Data Protection Commission is notably risk-averse in its approach to potential procedural challenges in the expensive Irish courts.[21]
5. Concluding remarks
In summary, UK and EU institutions have lost several important frameworks that previously coupled them together. The loss of these frameworks creates costs and risks for UK businesses and disincentives for investing in the UK, and hinders certain types of regulatory action and co-operation. However, it does create some limited opportunities for UK regulators to act faster than EU regulators on illegal activity, if they choose to do so.
Other UK policy-making institutions, like the private standards sector, are also excluded from the most cutting-edge technology policy debates, and this may continue as a trend if digital policy moves to rely heavily on standards, and the EU moves faster than the UK.
In the UK, lack of capacity and expertise are significant practical challenges for regulators and the civil service. The common law system in the UK and Ireland and the high costs associated with the court system shift significant power to technology firms, which have much deeper pockets than both regulators and civil society organisations.
In some areas, as we note in our paper on content moderation, Regulating Big Tech Platforms,[22] competing regulatory standards or approaches may either conflict or benefit from institutional dialogue. Co-operation on developing transparency reporting standards, for example, could provide mutual benefit; a more difficult area is likely to be competing and conflicting standards on “risky” content and safety measures. Nevertheless, the real world does intervene, and regulators are frequently dealing with the same companies. While the UK may be at a disadvantage as a standard-setter due to its size and the institutional issues explored in this paper, the benefits of dialogue and practical co-ordination remain for both sides, and should be fostered.
Acknowledgments: We would like to thank Dr Michael Veale for his significant contribution to this briefing.
[8] See e.g. https://www.politico.eu/article/luxembourg-data-watchdog-big-penalties-not-the-aim-amazon-paypal/
[10] See e.g. https://www.politico.eu/article/ireland-general-data-protection-regulation-gdpr-criticism-privacy-didier-reynders-european-commission/
[13] See e.g. Michael Veale and Frederik Zuiderveen Borgesius, ‘Demystifying the Draft EU Artificial Intelligence Act’ (2021) 22 Computer Law Review International 97. doi.org/gns2s9
[15] https://single-market-economy.ec.europa.eu/single-market/european-standards/standardisation-policy/general-framework-european-standardisation-policy_en
[18] https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai
[21] See e.g. noyb. (2021) Irish DPC “handles” 99.3% of GDPR complaints, without decision? Available at: https://noyb.eu/en/irish-dpc-handles-9993-gdpr-complaints-without-decision. (Accessed 22.08.2022)
[22] INSERT LINK ONCE FINALISED