How not to use tech in a pandemic - lessons from the UK

Commentary

From legal challenges to delays, leaks and policy reversals, the UK has stumbled in its tech response to the pandemic. A full assessment of what happened will take time, but the record thus far already offers lessons for the future.

Boris Johnson chairs a remote Covid-19 meeting

At the beginning of March, UK Prime Minister Boris Johnson’s chief advisor, Dominic Cummings, invited technology CEOs and business leaders to a Downing Street meeting. About 40 reportedly attended, from large tech companies like Google, Facebook, Amazon, Apple, Microsoft and Palantir, to smaller British companies such as the food-delivery service Deliveroo and Babylon Health. “For the prime minister's startup-loving advisor, it’s big tech versus bad virus”, WIRED UK summed up the encounter.

What is striking about the UK’s early embrace of technology to fight the pandemic is that it emerged against the backdrop of the government’s early flailing over more conventional measures. During February and March, the Conservative government led by Johnson famously embraced, then later abandoned, the concept of ‘herd immunity’, delaying the necessary lockdown by weeks and likely costing tens of thousands of lives. A lesser-known failure of that initial period was the government’s abandonment, in late March, of large-scale testing and tracing, both practices recommended by the WHO.

It was around that time that the UK started heavily ramping up its tech response. By mid-March, much earlier than many other European governments, the UK started developing a contact-tracing app. Shortly after, the Department of Health and Social Care published a blog post called "The power of data in a pandemic", announcing that the government had commissioned a data platform to merge all data crucial to tackling the pandemic – ICU availability, Covid-19 test results and NHS 111 online/call-centre data, for example – into a single database. Little detail was shared at the time, except that the government would work with Microsoft, Palantir Technologies, Amazon Web Services (AWS), the London-based artificial intelligence company Faculty, and Google.

More than three months later, the country’s overall response to the pandemic, including its embrace of technology, can only be described as a disaster. By early June, the Financial Times counted 62,000 excess deaths in the UK since the outbreak began, making the UK as a whole, and England in particular, the hardest hit of all the G7 major industrialised nations over that period.

Meanwhile, the contact-tracing and data-platform projects are facing significant practical, legal and ethical challenges. In a major U-turn, the UK abruptly abandoned its centralised contact-tracing app in favour of a model based on technology provided by Apple and Google. Because of the time lost, a minister said, a launch is not expected before winter.

At the same time, privacy campaigners have reported the National Health Service (NHS) Test and Trace scheme to the country’s data-protection authority. The scheme, which includes the tech infrastructure England needs to conduct manual and web-based contact tracing and swab testing, was launched in May with an incomplete data privacy notice that also stated data would be retained for 20 years. And only after openDemocracy and the tech-justice firm Foxglove threatened to sue did the government publish the contracts governing its data-sharing agreements with Google, Microsoft, Faculty and Palantir. The documents revealed that the terms of the deal changed after Foxglove’s initial request under the Freedom of Information Act, and many details remain unclear.

Preventable and entirely predictable delays

The UK isn’t the only country that has stumbled in using technology to track and contain the virus. Several countries changed course on their contact-tracing apps (notably Germany, but also Poland), and others in Europe are facing legal challenges (a Slovakian court, for instance, declared the country’s planned telecommunications data sharing unconstitutional).

Perhaps because the UK is not alone, a popular narrative is emerging that frames the UK’s apparent failures as either perfectly normal or the inevitable side effect of innovative uses of technology. In Parliament, Johnson incorrectly suggested that no country has a working contact-tracing app (in fact several do, including Germany). Meanwhile, a commentator for the Evening Standard called the decision to ditch the home-grown contact-tracing app a positive example of a public sector that’s “agile, willing to experiment and able to pivot when tests show problems”.

In reality, what makes the UK’s challenges unique is that they were either preventable or entirely predictable. Nowhere is this more evident than in the country’s U-turn on its contact-tracing app. Even before the app was tested on the Isle of Wight, experts warned that the NHS’s approach probably would not work as well as expected, especially on iPhones. That is because, in contrast to the decentralised Apple-Google API that the UK has now adopted, the NHS’s original approach required workarounds to function as advertised. Related issues, such as the fact that Apple’s Bluetooth design makes it harder to detect iPhones running the app in background mode, were detailed and reproduced on GitHub by volunteers within 24 hours of the NHS app code being made open source. No trial was required to reach that conclusion.
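To make the architectural difference concrete, here is a deliberately simplified Python sketch of the decentralised model the UK has now adopted. It is not the actual Apple-Google cryptography (the real framework derives rotating identifiers with AES and HKDF, per its published specification); the names and the HMAC construction here are illustrative assumptions. The structural point it shows is that exposure matching happens on the handset, and the server only relays the keys of people who test positive, whereas the NHS’s original design reported encounter data to a central server.

```python
import hmac, hashlib, os

# Much-simplified sketch of decentralised exposure notification.
# The real Apple-Google framework uses AES/HKDF per its published spec;
# the HMAC-SHA256 derivation here is illustrative only.

INTERVALS_PER_DAY = 144  # one rotating identifier per ~10 minutes

def daily_key() -> bytes:
    """Random per-day key that never leaves the phone unless the user tests positive."""
    return os.urandom(16)

def proximity_id(key: bytes, interval: int) -> bytes:
    """Rotating identifier broadcast over Bluetooth; unlinkable without the daily key."""
    return hmac.new(key, f"EN-ID:{interval}".encode(), hashlib.sha256).digest()[:16]

# Phone A (later diagnosed) broadcasts; phone B passively records what it hears.
key_a = daily_key()
heard_by_b = {proximity_id(key_a, i) for i in (12, 13, 14)}  # three 10-minute windows

# After a positive test, A uploads only its daily key. The server is a dumb relay:
# matching happens on B's device, so no contact graph is ever centralised.
published_keys = [key_a]
exposed = any(
    proximity_id(k, i) in heard_by_b
    for k in published_keys
    for i in range(INTERVALS_PER_DAY)
)
print("Exposure detected:", exposed)  # -> True
```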

Meanwhile, an analysis by Privacy International found that the app was incompatible with a range of older Android devices and doesn’t allow people to opt out of analytics trackers by Google and Microsoft. Security researchers discovered wide-ranging flaws, one of which could allow hackers to intercept notifications, block them or send out bogus warnings. During the Isle of Wight trial, residents encountered numerous problems, including false alerts and drained phone batteries.

A full assessment of what happened will take time, and just like the pandemic itself, the tech response to the coronavirus crisis is still ongoing. However, some early lessons have already become apparent:

1. When trust is the most precious resource, tech cannot be released in beta versions

There is a place and a time for government to experiment, but not during a crisis, when public trust is a necessary condition for a technology to work. Voluntary contact-tracing apps will only serve their purpose if a sufficient number of people choose to install and use them. That requires public trust, and governments can’t afford to squander it by releasing beta-quality software with entirely unnecessary flaws, such as embedded trackers, security vulnerabilities and inadequate privacy notices. These avoidable missteps cost more than precious time and money; they also undermine public confidence.
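The arithmetic behind this point is simple. Under the simplifying assumption that installs are independent and evenly spread across the population, both parties to an encounter must be running the app for the contact to be logged, so coverage falls with the square of adoption, as this back-of-the-envelope sketch illustrates:

```python
# Back-of-the-envelope illustration (simplifying assumption: installs are
# independent and evenly spread): both people in an encounter must run the
# app, so the share of encounters covered scales with adoption squared.
for adoption in (0.6, 0.4, 0.2):
    covered = adoption ** 2
    print(f"{adoption:.0%} adoption -> ~{covered:.0%} of encounters detectable")
```

At 40% uptake, roughly 16% of encounters are even detectable; every trust-destroying misstep that deters installs translates directly into lost coverage.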

2. Data protection is more than a box-ticking exercise

Data protection is one of many critical ingredients of responsible technology and must be taken seriously. From the absence of data protection impact assessments to sloppy privacy notices that used phrases that don’t even exist in UK law, data protection was too often treated as an afterthought rather than a guiding principle that shapes how technology is designed and built from the outset. Principles like transparency, privacy by design and by default, and data minimisation (the idea that data should not be collected, held or further used unless this is essential for purposes that were clearly stated in advance) help ensure that any given tool collects only the data it strictly needs to function. Likewise, data protection impact assessments are not just a legal requirement for risky applications of technology; they are also a crucial way to assess possible risks before a technology is deployed.
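As a hedged illustration of what minimisation can look like in practice (the schema and field names below are hypothetical, not drawn from any NHS system), collection can be tied to an explicit allow-list per stated purpose, so that anything not essential is rejected at the point of ingestion rather than filtered, or not, later:

```python
# Illustrative sketch of data minimisation (hypothetical field names,
# not the NHS schema): each purpose declares the only fields it may ingest.
PURPOSE_FIELDS = {
    "exposure_notification": {"rotating_id", "timestamp", "signal_strength"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only fields essential to the declared purpose; drop everything else."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "rotating_id": "a3f9...",
    "timestamp": 1591000000,
    "signal_strength": -67,
    "gps_location": (51.5, -0.12),      # not needed for proximity matching
    "phone_number": "+44 7700 900000",  # not needed at all
}
print(minimise(raw, "exposure_notification"))
# -> only rotating_id, timestamp and signal_strength survive
```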

3. Committing to transparency is not the same as truly working in the open

Transparency has been a core talking point in the UK government’s tech response to the pandemic from the beginning. This rhetoric, however, was frequently undermined by the way crucial information was communicated, or withheld, in practice: the initial refusal to release data-sharing agreements, the leaking of controversial plans about future uses of technology, and the withholding of information from the very committee set up to provide oversight of the contact-tracing app. Truly working in the open means instituting a culture of openness, with clear, regular public communication about projects and the publication of machine-readable data and models. More than limiting speculation, this builds trust.

Now is not the time to put people at risk

Not every problem has a technological solution, but technology and data clearly can play an important role in tackling the pandemic and its economic and social consequences. The UK’s tech response highlights the importance of monitoring and ensuring that technology is used responsibly, proportionately and based on evidence that supports its effectiveness. 

As elsewhere, much public discussion about the use of data and technology to tackle the pandemic in the UK has focused narrowly on a supposed trade-off between (individual) privacy and public health. This resembles debates about using surveillance to fight (domestic) terrorism. Framed this way, it’s easy to conclude that the ends justify the means. The framing also ignores the fact that, especially during a pandemic, when public trust is crucial, responsible technology can be more effective. Rather than asking whether we have to give up privacy, it is far more important to ask whether any restrictions of rights are proportionate, effective, lawful and truly temporary.

These considerations are especially important because, as is so often the case with technology, those who are already marginalised shoulder a disproportionate share of the risks and harms. How technology is used in a crisis also matters because it will inevitably set the template for what comes next. We’re still in the middle of a global pandemic and at the beginning of a recession that is likely to widen inequality. If there has ever been a time to ensure that technology does not put people at risk, it is now.