Do no harm? How the case of Afghanistan sheds light on the dark practice of biometric intervention

Commentary

In August 2021, as US military forces exited Afghanistan, the Taliban seized facial recognition systems, highlighting just how tangibly a failure to protect people’s privacy can threaten their physical safety and human rights. Far from being good tools that fell into the wrong hands, these systems are part of broader structures of data extraction and exploitation spanning continents and centuries, with a history wrapped up in imperialism, colonialism and control.

Image caption: Biometrics - when human beings become data points.

The recourse to digital technologies has become common in humanitarian intervention. Managing the distribution of food, shelter and people, in the midst of both environmental and human-made crises and instability, is a complex challenge to say the least. So it’s no surprise that the search for robust solutions that can withstand intractable conditions often leads to the glittering promises of emerging technologies.

The growing hype around biometric “solutions” is one such example. ‘Biometrics’ refers to data about our faces, bodies or behaviours that has undergone specific technical processing in order to be readable by a machine. This could be, for example, our face in a photo being turned into a face template in a database.
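To make the idea concrete, here is a minimal, purely illustrative Python sketch of how a photo becomes a machine-readable template. It assumes the open-source face_recognition library is installed; the file names are hypothetical placeholders, not any real system.

    # Illustrative sketch only: turning a photo into a biometric "template".
    # Assumes the open-source face_recognition library (pip install face_recognition).
    # File names are hypothetical placeholders.
    import face_recognition

    # Load a photo and reduce the face in it to a 128-number vector: the "template".
    image = face_recognition.load_image_file("photo_of_a_person.jpg")
    templates = face_recognition.face_encodings(image)  # one vector per detected face

    if templates:
        template = templates[0]
        print(template.shape)  # (128,) - a person, reduced to 128 floating-point numbers

        # Once stored in a database, the template can be matched against any new image.
        new_image = face_recognition.load_image_file("another_photo.jpg")
        for candidate in face_recognition.face_encodings(new_image):
            print("Same person?", face_recognition.compare_faces([template], candidate)[0])

Once a template like this sits in a database, it can be compared against faces from any other source, which is precisely what makes centralised biometric databases so consequential.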

Biometric systems are those using facial recognition, fingerprinting or other biometric data analysis to identify people, categorise them, or even judge their emotions. All three types have received strong criticism from human rights groups around the world. Yet biometric systems are common in the humanitarian space: of the United Nations’ nine humanitarian agencies, at least five have rolled out large biometrics programmes.

Frequently, there is a serious lack of care and attention to the fact that using biometric and other digital systems to track, record or identify people in at-risk or vulnerable situations comes with big risks to their privacy, security and other human rights. In just one of many concerning examples, the UN’s refugee programme came under fire from one of the UN’s own Special Rapporteurs, E. Tendayi Achiume, for projects which coercively made access to food contingent on biometric identification, based on the weak justification of preventing fraud. Her report further describes how a faulty biometric system led to the denial of food to Rohingya refugees who had fled massacres – an unthinkable situation for people who had already experienced unspeakable horrors.

In recent years, organisations like Oxfam and the International Committee of the Red Cross have engaged more with the risks associated with the use of data and technology, and the need for caution. However, the responsible and ethical use of technology in the humanitarian sector is not yet where it needs to be. There is a systemic lack of rigorous questioning about whether certain technological approaches are necessary in the first place, given the severe harms that they can bring. The problem is also structural: aid organisations have pointed out that donor funding is often contingent on demonstrating that they are addressing fraud, pushing them towards the quick fix offered by biometric identification systems.

The US military’s use of biometrics in Afghanistan

In the military context, these biometric “solutions” pose arguably even greater risks. There is a long history of state actors around the world centralising information as a way to more efficiently monitor populations and keep them in check. If knowledge is power, then databases of biometric data that can permanently identify people are treasure troves of almost unlimited power.

We have seen this in the US military’s spectacular betrayal of the Afghan population. In August 2021, The Intercept reported that, as part of the US’s rushed departure from Afghanistan, its mass biometric data-gathering tools and systems – allegedly intended to centralise biometric and other forms of data on 80% of the Afghan population – had been seized by the Taliban. Understandably, many people rushed to condemn the “bad hands” into which this technology had fallen. Stories abounded about how the systems had enabled the Taliban to locate and murder Afghans who had collaborated with US or allied forces.

Yet the blame for these harms also extends far further back in the series of events than just the Taliban’s abhorrent actions. American forces were collecting and centralising data that they knew was highly sensitive and risky, about large swathes of the population, on the basis of increasing “efficiency”.

Given the ongoing political instability in the region, even the most rudimentary impact assessment would have revealed the enormous and easily foreseeable risks that such a system would pose. At best, this programme was wilfully negligent in its disregard for human rights. It serves as a cautionary tale not just for governments, but also for humanitarian organisations and private companies considering using biometric systems.

A legacy of control and exploitation

The first modern biometric identification scheme arose from a collaboration in 1888 between William James Herschel, a British colonial administrator in India, and Francis Galton, the founder of eugenics. From these unsavoury roots, fingerprinting grew into a fully-fledged centralised biometric identity programme, enabling the British colonisers to permanently surveil, track and classify the population of India.

Although this particular system was overturned, in part due to opposition from Mahatma Gandhi, today we see its resurgence in contemporary India’s notorious Aadhaar system of biometric identification, which has already been shown to deny people welfare and healthcare, discriminate against trans people and transform individuals into commodities rather than humans with agency and autonomy.

There are similarly chilling stories from apartheid South Africa, Nazi Germany and genocide-era Rwanda, where people were classified by their biometric features (their skin colour, or the shape and size of their facial features), and those classifications became the basis for segregation, violent attacks, murder and genocide.

Across these examples, people had no choice but to comply with the extreme violation of their human rights as part of wider imperialist regimes. Furthermore, their access to public society and services was frequently made contingent on their compliance.

Moreover, each of these systems used people as a means to somebody else’s end. And these ends – such as stratifying populations by race or ethnicity in order to discriminate against them – force the people subject to biometric systems to be complicit in their own transformation from human beings into data points. Each of these systems was underpinned by a desire to subjugate some people beneath others, to coerce them, and to extract their identity in order to control them.

Oranges, apples and exploitation

One of the more bizarre facts about the US military’s collection of data in Afghanistan was that it kept records of details such as people’s favourite fruits and vegetables. Whilst this may seem like odd data to track alongside scans of people’s faces and fingerprints, both government surveillance and commercial surveillance (sometimes referred to as surveillance capitalism) rely on making decisions or predictions about people on the basis of huge amounts of prima facie irrelevant data.

In China, for example, the notorious ‘social credit score’ reportedly uses everything from whether people buy alcohol, through to whether they have interacted with someone with a lower score, in order to decide whether they can board a bus, get a university place or travel abroad. This is propped up by mass facial recognition, and is part of the same state surveillance apparatus that has enabled the persecution of people from the Uighur minority. People’s entire digital identities, gathered from biometric databases, social media and more, are used against them.

This same ideology underpins the vast collection of data by corporations like Google, Amazon and Facebook. That is to say, the more you know about a person, the easier it is to track, manipulate and ultimately control them. In the last decade, companies – often in partnership with governments – have expanded from collecting information about browsing and search histories, to combining this already potentially sensitive data with biometric data like the rhythm with which we type, the sound of our voices and our apparent mood; minute details that together comprise, but at the same time do not define, who we are.
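To illustrate just how minute these behavioural details are, below is a toy, purely illustrative Python sketch of keystroke dynamics: reducing the gaps between keystrokes to a tiny “rhythm profile” that can be compared across sessions. All timing values are invented for the example; real systems use far richer features.

    # Toy sketch of keystroke dynamics: typing rhythm as a behavioural biometric.
    # All timing values are invented for illustration.
    from statistics import mean, stdev

    def rhythm_profile(press_times):
        """Reduce key-press timestamps (in seconds) to two numbers:
        the average gap between keystrokes, and how much that gap varies."""
        gaps = [b - a for a, b in zip(press_times, press_times[1:])]
        return (mean(gaps), stdev(gaps))

    def distance(profile_a, profile_b):
        """Naive Euclidean distance: smaller means more similar typing rhythm."""
        return sum((x - y) ** 2 for x, y in zip(profile_a, profile_b)) ** 0.5

    # Two typing sessions, possibly by the same person (timestamps in seconds):
    session_1 = [0.00, 0.21, 0.39, 0.62, 0.80, 1.05]
    session_2 = [0.00, 0.19, 0.41, 0.60, 0.83, 1.02]

    print(distance(rhythm_profile(session_1), rhythm_profile(session_2)))

The principle is the point: something as mundane as how we type can be distilled into numbers that follow us around.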

The devil is in the data?

In a world where might is right, biometric systems and databases can be yet one more tool of oppression. Aid agencies and governments have been far too quick to embrace biometric ‘solutions’ that can do much more harm than good. The case of Afghanistan may, therefore, contribute to an existential reckoning for aid agencies and lawmakers around the world about data protection, and the imperial roots of biometrics.

In the EU, negotiations have started on key pieces of digital legislation like the Artificial Intelligence Act. One of the fiercest battlegrounds in the Act is expected to be the use of biometric surveillance systems. Digital rights groups have long called for a ban on such practices, pointing to rights-violating uses in most European countries. Yet Home Affairs Ministers from across the EU are pushing for fewer controls. The Act’s attempts to regulate the use of biometric data in humanitarian contexts are even weaker, with international agreements and exports falling largely out of scope.

It is critically important that people’s digital rights are put front and centre in upcoming laws. The EU has a duty to prevent abuses of people’s data and privacy not just at home, but also wherever the actions of companies or authorities in the EU may cause harm to people at the bloc’s borders and in other parts of the world.

The cost of these biometric interventions is far too high. We must protect people’s sensitive and personal data like their lives depend on it – because as we have seen, they literally may.


Views expressed in this article are solely the author’s and do not reflect the opinions and beliefs of the author's employer.