Inside your pocket: the grave threat of disinformation on private messenger apps


WhatsApp is huge in Spain. It is on almost every phone, and Spaniards spend more time on the platform than in any other digital space. Telegram is also growing massively, yet the fundamental role these private messaging apps play in the dissemination of disinformation has often been overlooked. It is there that most disinformation originates and, at the same time, there that it is hardest to monitor. It is essential that our conversations on WhatsApp remain absolutely confidential, but that does not preclude finding ways to address falsehoods on these platforms as well.



This commentary is part of our dossier "Drowning in disinformation", which explores how homegrown state-sponsored disinformation threatens EU democracy.


One of the major battles for the preservation of our democratic liberties is being fought right now inside our pockets. Democracy can only be as good as the information voters have when they make decisions, and there is a thriving sector dedicated to misleading, manipulating and polarizing them. It is happening as we speak, on the screens of our cell phones, particularly on the single piece of technology we use most: private messaging services.

Despite all the attention given to how social networks are used to spread disinformation, private messaging is a much more complicated area. The first reason is sheer size: Twitter has 7 million users in Germany, while WhatsApp has almost 60 million. The second relates to the nature of private messaging itself: none of us want our messages to be read, and rightfully so, but that has consequences. When lies are told in family chats, there are no fact-checkers and no public scrutiny. That is, unless we find a way to address the problem.

This is a problem that we have thought about long and hard. As the premier Spanish independent fact-checking organization, we realized long ago that most of the disinformation we debunk starts in private messaging apps before spilling over to open platforms such as YouTube, Facebook or any of the others. More importantly, we already knew that the sooner a hoax is debunked, the smaller its reach. Aware of these two things, what we needed was a window into what was happening inside private messaging platforms. But how?

Getting in on the inside

WhatsApp is huge in Spain. Spaniards spend over a hundred minutes every day on the app, longer than anywhere else online. Its group chats are where much of the social conversation happens, where families and friends make plans, but also where they get their news. Over a third of the country uses it to “find, read, share and comment”. For us as fact-checkers, staying out was not an option. But in the beginning, getting in was not that easy.

We knew WhatsApp encrypts messages end-to-end, so the only way to learn what was going on there was through the goodwill of our community. People had to actively send us the content they were receiving and found “suspicious”; that would be our way in. We started as anyone would: getting a cell phone, installing WhatsApp on it and letting people know they could contact us there. And we were very encouraged by their response.

Thousands of people offered us a way into a space that until then had been shrouded in darkness. Now we were able to see inside, to see which content was going most viral, and sometimes even to debunk it before it reached public platforms. This not only let us detect disinformation, but also helped us immensely in disseminating quality information. It was a kind of quid pro quo: you send us this thing somebody sent you, we investigate it, and if we debunk it you share the debunk with that person, in that same group. That is the only way to make a difference. And they bought into it.

We had found our way in. We were finally inside the ground zero of disinformation, and we were thrilled. But also a little worried. Among the many useful insights, we also got a lot of noise on our WhatsApp number. Our team had to sort through a flood of messages daily, many of them hideous, and manually annotate how many times something was coming up, on top of replying to users who asked about something we had already debunked. It was a lot of work, and much of it was mechanical. Then COVID-19 came along and changed everything.

Tech on our side

If before the pandemic we received maybe a hundred WhatsApp messages on any given day, in March 2020 we easily got 2,000. It was, in some ways, good news: people needed us, people wanted to know more and people cared. However, it soon became clear that our team could not handle everything manually; even doubling its size would not have been enough. Our window into this critical corner of the disinformation landscape was closing because we could not process all the data we received, much less reply to all the wonderful people in our community who were generously sharing what was circulating in their WhatsApp chats. We needed a change.

It was then that we decided to create a bot. Surely there was a way for technology to take over some of the classification work we were doing on the input we received, right? We started researching and quickly found that this was the case. Our engineering team got to work, with some external help, including from WhatsApp itself, and built a bot that could tell us how many times a particular piece of content had been sent to us, could transcribe viral voice messages, and could respond to users in real time with “here you have information on what you’re asking about” or “I’m sending this to my human colleagues so they can investigate it and get you an answer”.
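To make the idea concrete, here is a minimal sketch of the core of such a bot; it is not our actual implementation, and the names (`fingerprint`, `handle_incoming`, `DEBUNKED`) are hypothetical. The sketch normalizes each forwarded message, fingerprints it so near-identical forwards collapse into one counter, and then either points the sender to an existing debunk or queues the message for human review:

```python
import hashlib
import unicodedata
from collections import Counter

# Hypothetical stores: published fact-checks keyed by message
# fingerprint, and a counter of how often each fingerprint arrives.
DEBUNKED = {}        # fingerprint -> URL of the published debunk
COUNTS = Counter()   # fingerprint -> number of times received


def fingerprint(message: str) -> str:
    """Normalize a message so trivially edited forwards (case,
    extra spaces, unicode variants) map to the same key, then hash it."""
    text = unicodedata.normalize("NFKC", message).casefold()
    text = " ".join(text.split())  # collapse all whitespace runs
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def handle_incoming(message: str) -> str:
    """Count the message and pick the automatic reply."""
    key = fingerprint(message)
    COUNTS[key] += 1
    if key in DEBUNKED:
        return ("Here you have information on what you're asking about: "
                + DEBUNKED[key])
    return ("I'm sending this to my human colleagues so they can "
            "investigate it and get you an answer.")
```

The fingerprinting step is what turns thousands of raw messages into a virality signal: once two forwards hash to the same key, the team sees one item with a count instead of two separate messages.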

It was a win-win for everyone involved. Our community gets answers to their questions much quicker than before. We get more information, automatically classified, and a better view of how viral a piece of content is and how that evolves over time. We also free our journalists from filling in spreadsheets, so they can devote their time to what they do best and computers cannot: investigating, finding answers and producing quality content to debunk falsehoods. We are proud of it – and it does not hurt that it won us the European Press Prize for innovation in 2021. You can find more about the project in this study.

This is a good example of how technology can help fight disinformation effectively. Many people keep looking for a magical solution, such as an algorithm that is able to recognize disinformation and filter it out of our cell phones and computers. It is futile. Disinformation is a very nuanced, human concept. Algorithms are not yet able to identify humour, sarcasm, human mistakes or pure opinions - and in matters pertaining to freedom of expression, it is always better to err on the side of caution. However, there are technologies that are ready right now and could make a difference in fighting disinformation where it matters most: on private messaging platforms.

I have already talked about our community-based approach to detecting disinformation and disseminating debunks, but there is more. Without ever violating the privacy of our private messages, there are simple solutions that messaging platforms can apply. WhatsApp, for example, has been experimenting with a button that simply lets users search the web for a forwarded message to see if a fact-checker has already debunked it. It is easy, it is respectful and it does not prevent anyone from expressing their opinions.

Sadly, the other big-league private messaging company in the world has decided to do next to nothing about disinformation. Telegram is already installed on 1 billion devices around the world. Well-known figures in the disinformation industry who have been expelled from every other platform are currently cultivating large audiences and fundraising on Telegram, where they run public channels open to everyone, and where it is common to denigrate Holocaust victims, promote unscientific narratives about vaccines, harass medical professionals, and publicize every kind of conspiracy theory.

The disinformation we see in private messaging

After a few years of watching disinformation in private messaging, we know that there is certain content that keeps coming back. There is lots of disinformation about politics, such as false quotes, manipulated videos and so on, but also certain political, although not explicitly partisan, issues that generate a lot of falsehoods. The best examples would be inherently racist narratives about migrants and people of colour (false information about supposedly unfair welfare benefits, blaming them for crimes they did not commit or inventing them outright, etc.), but also those attacking gender equality (fake quotes and statistics, impersonation, etc.).

Beyond these ideological-political narratives, in the last year we have witnessed a tremendous increase in hoaxes that are actually scams: phishing, fake official communications asking for money, impersonation of delivery services, and so on. This kind of disinformation has the potential to reach a much wider audience than those interested in politics, and as such it deserves much more attention from fact-checkers, platforms, authorities and the general public.

Speaking of attention: after the pandemic, the US Capitol siege and so much else in the last few years, it is hard not to be aware of the dangers of disinformation, particularly the kind we mostly do not see because it moves around private messaging. Sometimes this crisis is framed as a problem that affects only a tiny fraction of the population, such as those most engaged with politics. It is not. At other times, lawmakers tend to see it merely as a manifestation of foreign interference in our political process. It is much more than that.

There is no easy way out of this crisis, no decisive victory in sight. All we can do is keep fighting, advancing an inch every day. Currently, we are not even doing that. We need to mobilize people to help us fight disinformation. We need technology to help us focus our efforts where they can really make a difference. We need quality research to learn more about the phenomenon and make better decisions. We need platforms to do more and governments to broaden their understanding of disinformation. And we have little time to waste.