Underscored by the algorithm: AI's impacts on labour and environment Commentary What are the impacts of Artificial Intelligence (AI) on human labour and the environment? How do legislative proposals for regulating AI in Europe and Brazil respond to these impacts beyond discussions on surveillance and automated decision-making bias? By José Renato Laranjeira de Pereira and Thiago Guimarães Moraes
Promoting irresponsible AI: lessons from a Brazilian bill Commentary In the coming months, the Brazilian Senate will vote on a 10-article bill establishing principles for the development and deployment of artificial intelligence (AI) systems. Its content, however, may help perpetuate recent patterns of algorithmic discrimination through provisions that hinder accountability for AI-induced errors and restrict the scope of rights established in Brazil’s General Data Protection Law and in the Brazilian Constitution. By José Renato Laranjeira de Pereira and Thiago Guimarães Moraes
Artificial Intelligence: “Talk about an AI divide between the US and the EU is exaggerated” Interview A Bill of Rights for the AI-enabled world, regulatory challenges, and socio-technical risks: Jessica Newman, who leads the AI Security Initiative at UC Berkeley’s Center for Long-Term Cybersecurity, discusses recent AI developments in the United States and Europe with our Transatlantic Media Fellow Ekaterina Venkina in an interview for BigData-Insider. By Ekaterina Venkina
“We need to be careful what we optimize our AI systems for” Interview How do we preserve our humanity in a world of intelligent machines? AI researcher Mark Nitzberg on the need to build AI models that are safe for humans and make explainable decisions – and why standards and oversight are key. Our fellow Ekaterina Venkina interviewed the Executive Director at the Center for Human-Compatible AI at UC Berkeley (CHAI) for RedaktionsNetzwerk Deutschland (RND). By Ekaterina Venkina
AI and Elections – Observations, Analyses and Prospects Spotlight This Spotlight explores how AI’s ability to disseminate information more effectively is prone to abuse and can pose a threat to democracy. It then discusses the preconditions for, and potential of, AI to support the building of a critical public sphere.
The EU's Artificial Intelligence Act: Should some applications of AI be beyond the pale? Commentary The European Union’s Artificial Intelligence Act aims to regulate emerging applications of AI in accordance with “EU values”. But for the most concerning of all such potential applications, the line between regulation and prohibition can be a tricky one to draw. By Alexandre Erler
Artificial Intelligence and Democracy Background This backgrounder introduces readers to the research literature on AI’s impact on democracy. It surveys literature in three distinct areas: AI and the democratic public sphere, the impact of AI on election campaigns, and the importance and accountability of automated decision-making systems in public services.
The gender gap in AI Explainer The field of Artificial Intelligence (AI) has grown exponentially as the world is increasingly built around automated systems and ‘smart’ machines. Yet the people whose work underpins this are far from representative of the society these systems are meant to serve. There is a wide gender gap in AI, with women significantly underrepresented in the data and AI fields globally.
The platform economy Dossier The major platform providers have become decisive players on the internet, not only as critical information infrastructures but also at the level of content. They moderate and curate content and block accounts based on rules they set themselves. We ask: How do private companies influence the public debate, and how can they be democratically scrutinised?
Cross-Cultural Values defining AI Governing Principles Spotlight Like all iterations of technological progress that preceded it, artificial intelligence is no exception to the rule that technology itself is inherently neither good nor bad. This Spotlight analysis reflects on the question of how to evaluate AI-based technologies that could improve citizens' quality of life and consumer experience, but that are vulnerable to abuse.