Promoting irresponsible AI: lessons from a Brazilian bill

Commentary

In the coming months, the Brazilian Senate will vote on a 10-article bill establishing principles for the development and deployment of artificial intelligence (AI) systems. Its content, however, may help perpetuate the algorithmic discrimination seen in recent cases, through provisions that hinder accountability for AI-induced errors and restrict the scope of rights established in Brazil's General Data Protection Law (LGPD) and in the Brazilian Constitution.


The Brazilian Senate is about to vote on a Legal Framework for Artificial Intelligence (Marco Legal da Inteligência Artificial, PL 21/2020) that may profoundly impact the exercise of fundamental rights in the country.

The bill was approved by the Chamber of Deputies at the end of September 2021 under an emergency procedure, and the debate ignored the demands of experts and civil society organisations to address the technology's high risks to fundamental rights. Instead, Members of Congress delivered fervent speeches on the positive impacts of AI on society, especially as a tool for efficiency and innovation.

Although it may seem, at first sight, more like a set of broad guiding principles than a proposal to regulate such a complex subject, PL 21/2020 contains excessively generic provisions, and its 10 articles may severely weaken civil liability, consumer protection and data protection laws.

Obstacles to effective accountability for AI system failures

Among the most alarming provisions in the bill is Article 6 (VI), which has drawn criticism from several renowned Brazilian scholars and civil society organisations, especially those belonging to the Brazilian Rights in the Network Coalition. It establishes a civil liability regime based on fault (known as subjective liability under Brazilian law) for agents involved in the development and operation of AI systems.

This means that any individual who suffers damages caused by an AI system and wants to receive compensation will have to prove that a mistake was committed by a specific agent in the system's life cycle and that this agent acted with intent or negligence.

The provision goes against the Brazilian tort liability regime, which focuses on the adequate compensation of the injured individual. Under that regime, it is unnecessary in many cases for the individual to prove any intention or negligence on the part of the manufacturer or supplier of a defective good in order to receive compensation. This rationale applies especially to the provision of services or goods that carry a high risk of violating someone's rights.

Furthermore, the high degree of autonomy and unpredictability of AI systems makes it difficult to distinguish between damages that result from direct human error and those triggered by the regular operation of the algorithm, which should also be subject to compensation.

In this sense, if approved, Article 6 (VI) will render reparation for damages caused by AI largely ineffective in Brazil. For example, it will place an enormous burden on individuals seeking compensation for errors made by facial recognition systems that lead to wrongful arrests by the police, something that has already occurred in Rio de Janeiro and other Brazilian states.

Ineffective provisions on non-discrimination and transparency

Although the bill claims to promote a principles-based regulation inspired by the OECD's recommendations, many of its proposed principles give rise to controversy, and at least three require special attention.

The first is the non-discrimination principle, which requires only that AI be developed and used in a way that mitigates the possibility of systems being applied for illicit or abusive discriminatory purposes.

A second problematic provision, closely linked to the previous one, is the "pursuit of neutrality" principle. It consists of a mere recommendation that agents involved in the development and operation of AI systems strive to identify and mitigate biases contrary to current legislation. Once again, there is no obligation to achieve this goal.

These provisions narrow the scope of the non-discrimination principle in the LGPD, which prohibits the processing of personal data for illicit or abusive discriminatory purposes.

Absent any obligation to guarantee fair, non-discriminatory AI systems, the bill will make accountability for discriminatory biases in AI even harder to achieve. This scenario is particularly problematic in a country marked by systemic racism: of the detentions carried out by police using facial recognition in 2019 for which data on skin colour was recorded, 90% involved individuals with darker skin tones.

A third principle worth highlighting is the proposed "transparency principle", which provides that transparency about a system may only be demanded when it poses a high risk to fundamental rights.

This provision restricts the scope of the principle of the same name established by the LGPD, under which data subjects must receive clear, precise and easily accessible information about how their personal data is processed and about the respective processing agents. It also affects the application of Article 20, § 1º of the LGPD, which provides that a data subject can request information on any personal data processing carried out by an automated decision-making system whenever it may affect the data subject's interests.

The LGPD provisions are fundamental in allowing individuals to know how their personal data are processed by AI systems, for what purposes, and through which specific means. This is an important tool, considering that AI is frequently used in an opaque manner, without meaningful information on how exactly it reaches its outputs. With the AI Bill's far more restrictive approach, Brazil will end up hampering data subjects in exercising their legal rights to data protection.

Finally, it is worth noting how hard it would be to find similarities between the Brazilian Bill and the European Union's proposed AI Act. Whereas the EU proposal aims to be an extensive risk-based regulation with strong restrictions on the use of high-risk systems (though it is not free from criticism), the Brazilian Bill consists of 10 generic articles and mentions a risk-based approach only as a recommendation.

The Brazilian AI Bill, as it currently stands, gravely undermines the exercise of fundamental rights such as data protection, freedom of expression and equality, to say the least. It also fails to address the risks of AI while facilitating a laissez-faire approach under which the public and private sectors may develop, commercialise and operate systems that are far from trustworthy and human-centric. A thorough debate on the theme is necessary, with the participation of experts, civil society and the groups most affected by these technologies, something that has not happened with this Bill. Otherwise, Brazil risks becoming a playground where irresponsible agents can violate rights and freedoms without fear of liability for their acts.

 

This opinion piece was drafted by the authors independently of the institutions they represent. The views presented above do not reflect an institutional position; they are solely those of the authors.