The EU’s Digital Policy

Conversation

How did the coronavirus impact digital rights, the introduction of contact tracing apps, and the daily use of digital tools and artificial intelligence? What does this mean for current and future policy?

MEP Alexandra Geese and EDRi's Diego Naranjo discuss contact tracing apps, the daily use of digital tools and artificial intelligence with Zora Siebert, Head of EU Policy Programme, Heinrich-Böll-Stiftung European Union.

Zora: Thank you very much for joining this interview on the EU's Digital Policy. The first topic I would like to address is policy-making and the coronavirus. The pandemic hit us very quickly and on a large scale, and has changed our lives completely in recent months. What has changed in your work and, since we're talking about EU digital policy, what do you think about the impact of the coronavirus on digital rights?

Alexandra Geese: We are all more aware than before of the importance of digital tools, and therefore of how important digital policy is. A big change is that we now use video conferencing more, which allows us to keep working on policies. We are also more aware of the security issue, of how important it is to have a European infrastructure for this. Digital policy really is now at the centre of our thinking.

In practical terms, formal politics is working astonishingly well. It is possible to have committee meetings via video. Legislation is a little delayed, but mostly because people are working from home while schools and childcare facilities are closed. So it's not the lack of physical meetings that is delaying policy-making.

I do miss person-to-person interaction, but it's been amazing to see how many people we can reach with video conferences and webinars. We're doing these German-Italian webinars that are absolutely great, with 1,000 to 2,000 people online – that would have been impossible to organise without people being ready to participate in a digital meeting. I think that's very positive, and it opens up interesting opportunities for European politics.

What I really miss is being in touch with people who don't use technology or otherwise don't attend digital meetings. That's worrying because it enhances the feeling of living and working in a bubble. This is a great danger at the moment.

Alexandra Geese

Alexandra Geese is a Member of the European Parliament with the Greens/EFA group. She was elected in 2019 and serves as a full member of the Committee on the Internal Market and Consumer Protection and the Committee on Budgets. Her political focus is on digital policy, fundamental rights, and gender budgeting.

We've also followed the discussions on tracing apps, and there is a threat here in terms of surveillance. People are ready to give away a lot more of their privacy if they are promised security, safety and good health, so that's also a danger. But in some EU countries we have had a victory with the choice of decentralised storage for tracing apps. The debate around this was very good and fruitful; it gives me hope.
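For context: in the centralised model, phones upload their contact logs to a server, which performs the exposure matching; in the decentralised model chosen in several EU countries, the contact log never leaves the device and matching happens on the phone itself. Below is a minimal, purely illustrative Python sketch of that on-device matching, loosely in the spirit of decentralised protocols such as DP-3T – the class and function names are hypothetical, and the real protocols are considerably more involved.

```python
# Toy model of decentralised contact tracing: contact logs stay on the
# phone, and exposure matching happens on the device, not on a server.
import secrets

def new_ephemeral_id() -> bytes:
    """Phones broadcast short-lived random identifiers over Bluetooth."""
    return secrets.token_bytes(16)

class Phone:
    def __init__(self) -> None:
        self.own_ids: list[bytes] = []      # identifiers this phone has broadcast
        self.heard_ids: set[bytes] = set()  # identifiers heard nearby (kept locally)

    def broadcast(self) -> bytes:
        eid = new_ephemeral_id()
        self.own_ids.append(eid)
        return eid

    def observe(self, eid: bytes) -> None:
        self.heard_ids.add(eid)             # never uploaded anywhere

    def check_exposure(self, published_ids: list[bytes]) -> bool:
        # The server only publishes identifiers of users who tested positive
        # and consented to upload; the matching itself happens on the device.
        return any(eid in self.heard_ids for eid in published_ids)

# Alice and Bob meet; Alice later tests positive and uploads her identifiers.
alice, bob = Phone(), Phone()
bob.observe(alice.broadcast())
print(bob.check_exposure(alice.own_ids))    # True – Bob learns of the exposure locally
```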

Zora: Diego, do you want to jump in?

Diego Naranjo: At EDRi we thought that when the lockdown started we would have more time to do research or write reports, but that wasn't the case. We are working more than before; the use of technology during the pandemic has put a lot of weight on our shoulders. We reacted quickly with a statement in March: if you want to use technology to prevent the spread of the pandemic, please follow these requirements. We were glad that the European Commission took the right approach and included our recommendations in its toolbox and guidelines; we are also following the European Data Protection Board and the European Data Protection Supervisor on the implementation of these guidelines.

We have a good basis now with what the European Commission has said, and some of our members have access to a guide of dos and don'ts, so that's good. The problem we saw from the beginning, however, is that, as with any kind of surveillance, we run the risk of normalisation. If we implement surveillance for an emergency such as this pandemic, it will be the norm in two years' time, and we need to ensure this technology does not go beyond what is needed. More positively, we are now actually able to talk to Commissioners. Before the crisis they had many external meetings and limited time for us; now some of them have found time to hear our opinion on the impact of the coronavirus.

Zora: It's good to hear that you are all able to keep working in a proactive way, and even though the workload has increased for all of us with all the digital meetings, I'm glad the law-making process keeps going. One of the most important upcoming laws of this term is the Digital Services Act (DSA), with which the European Commission would like to update the liability regime for digital services. The new law is also supposed to tackle questions around content moderation. Diego, why does the Digital Services Act matter to consumers?

Diego Naranjo: We see the DSA, the reform of the e-Commerce Directive, as a good opportunity for everyone to really shape the internet we want to see in the coming years. We have defended the e-Commerce Directive quite strongly in the past. On the other hand, we see that platforms that might have been neutral ten or twenty years ago are actually not neutral. They create a power imbalance between citizens and those who own these companies. The business model is linked to a lot of other problems – misinformation, hate speech, illegal content and so on – which need to be tackled. We believe there should be an update of the rules on how content is moderated.

We also want to tackle the problem we have with advertising technology. That's going to be a different strand, one we have not yet worked on publicly; we are putting together a position paper on the issue. Some of our members have worked on this quite intensively, the Panoptykon Foundation and the Open Rights Group for example. We need to push for a deep reform of the micro-targeting business model. That's why the DSA is important for everyone: either we control how the technology works, or the technology will control how we work.

Diego Naranjo

Diego joined EDRi in October 2014 and works as Head of Policy. He leads the policy team's work on the protection of citizens' fundamental rights and freedoms online in the fields of data protection, surveillance, copyright and other dossiers. In the past, Diego gained experience at the International Criminal Tribunal for the former Yugoslavia, the EU Fundamental Rights Agency (FRA) and the Free Software Foundation Europe. Before that, he worked as a lawyer in Spain. Between 2017 and 2018 he was part of the expert group on digital rights of the Spanish Ministry of Energy, Tourism and Digital Agenda.

Zora: Alexandra, would you like to add something to that? The underlying problem of the reform is also that online platforms are just so big. How can we tackle this?

Alexandra Geese: I'd like to pick up on what Diego said, especially on the ad-tech issue. There is a huge problem with the ad-tech industry: it is the root of many of the imbalances we are experiencing with online platforms. But nobody has really been able to tell us what to do about it. For lawmakers this is a really difficult situation to be in, so I'd be curious to see what EDRi comes up with.

What I've been stressing, as the shadow rapporteur on the DSA report, is transparency of decision-making and of algorithms. What was lacking from the report was transparency about how the algorithms work. This is what we absolutely need, perhaps together with a ban on ad-tech. We need to know how a company chooses some content over other content.

What is also important in the European discussion is the distinction between social networks and online marketplaces. Many people in EU businesses and consumer associations are insisting on stricter rules for products sold on the internet. So many products sold online do not comply with European safety and health standards, and they are a lot cheaper than European-made products, which have to comply with our laws. If there is no liability and no obligation to know which company is offering those products, this is very difficult to combat.

I talked about algorithmic transparency, which is important for freedom of expression, because many people are so heavily targeted by hate speech that they have no freedom of expression on the internet. A lot of people tell me: 'Well, you know, I don't have a Twitter account, I'm not on Facebook, because I get so much hate. I don't go on talk shows anymore because the day after I get letters with pictures of my kids in front of their school.' These people are completely excluded from the public debate, and those are the voices we need to hear.

We have the choice of trusting companies to self-regulate or trusting governments to make rules and enforce them. But in Europe we have Poland and Hungary; where Viktor Orbán is attempting to decide what's on the internet, people might prefer Google or Facebook to decide. So what should we do?

My idea would be to have what I call 'social media councils', without the companies being represented. They would consist of advocacy groups, experts from different fields, and so on – all the people who are affected by the internet and freedom of speech. They should be the ones to look into the practices of companies and into companies' transparency reports. The added value of a social media council composed this way is that it would allow for a very transparent public debate.

We have these huge companies that have more power than governments, so we have to figure out where ordinary citizens come in. The citizens' assembly implemented in Ireland, for example, has had good results. We should try to develop an idea like that for social networks as well. This is something I've been pushing for, and I would be interested in bringing it into the public debate.

Zora: Thank you, Alexandra. It's great that you already have so many ideas and strategies. Just as context for our readers: you were referring to an own-initiative report in the European Parliament, right, because there is no official proposal by the Commission yet? You also mentioned transparency in algorithmic decision-making, which is a good transition to our third and last topic: artificial intelligence (AI). The EU would like to draw up rules for the use of AI. Diego, what do you think are the most important things that should be in such a legislative proposal?

Diego Naranjo: Within EDRi we've been having a discussion that AI should be human-centric, not innovation-centric. When it comes to personal data, we are very clear that we don't even want to talk about reopening the General Data Protection Regulation. Keep it as it is!

There is room for additional legislation, and that can include algorithmic transparency, as Alexandra mentioned. There is the risk of discrimination, and part of that discussion is about biometrics. In the context of the AI discussion biometrics is a sub-issue, but for us it's one of our main campaigns this year: we are asking for a ban on the use of biometrics for mass surveillance in publicly accessible spaces.

When AI is used in specific areas, such as public services, we want there to be democratic oversight, transparency, and evidence to support and justify the need or purpose. We are talking about, for example, predictive policing, or the use of AI by judges in sentencing. That needs to be very clearly discussed before it is even implemented.

Zora: Thank you, Diego. Alexandra, would you like to add something? Key suggestions for the new AI framework? I know you have a lot of ideas.

Alexandra Geese: I do, yes, since I am the rapporteur for my committee's own-initiative report. I totally agree with what Diego said. We put very clear obligations to combat bias into our report, and I was positively surprised to see that all political groups in the Parliament seemed to go along with that.

As Diego pointed out, AI is based on huge amounts of data, and the way data is collected is neither gender-neutral nor race-neutral. It's not an easy problem to solve, because you can't just change the data. We have to be aware of this and think of a way to deal with it.

We suggested a risk-based approach to the regulation of AI. This is not the same as the Commission's approach, which distinguishes only between low and high risk and is not nuanced enough. The German Data Ethics Commission made a much more nuanced proposal, with several levels of risk. That allows us not to hamper innovation where it's useful.

You have the highest-risk category for things that should be banned, and another category for things that need ex-ante assessments. You also have space in the middle for things where you don't know whether they entail risk or not. You can start using them, but there should be ways of redress in case of suspicion. This is where civil society and participation come in: we are suggesting that everyday citizens have redress mechanisms and can ask a national oversight authority to look into the documentation of an AI tool or to test that tool.
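To make the tiering concrete, here is a small, purely illustrative sketch of such a graduated scheme. The tier names and the measures attached to them are hypothetical – they are not the actual categories of the parliamentary report or of the German Data Ethics Commission – but they show how the regulatory response scales with the assessed risk:

```python
# Illustrative only: a graduated, risk-based scheme in which the
# regulatory obligations grow with the assessed risk of an AI system.
from enum import Enum

class RiskTier(Enum):
    NEGLIGIBLE = 1    # low risk: no special obligations
    UNCERTAIN = 2     # unknown risk: deployable, but open to citizen-triggered audits
    HIGH = 3          # high risk: ex-ante assessment required before deployment
    UNACCEPTABLE = 4  # intolerable risk: banned outright

def required_measures(tier: RiskTier) -> str:
    """Map each tier to its (hypothetical) regulatory response."""
    return {
        RiskTier.NEGLIGIBLE: "no special measures",
        RiskTier.UNCERTAIN: ("may be deployed; citizens can ask the oversight "
                             "authority to inspect documentation or test the tool"),
        RiskTier.HIGH: "mandatory ex-ante assessment and documentation",
        RiskTier.UNACCEPTABLE: "prohibited",
    }[tier]

print(required_measures(RiskTier.UNCERTAIN))
```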

AI can sound like something from the future, something you would need to be a technical expert with ten years of IT study to understand, but it's not. It's about basic rules, about fundamental rights in our society. We need to explain it in a way that everybody can understand, and we need to reach out to the communities that are most affected by it and most at risk – it's important to involve them in the whole debate.

Zora: Thank you very much to both of you. This was a great exchange, and I've learned a lot.

 

The conversation took place on 29 May 2020.