Access to source code and algorithms held by public authorities: dismissing the risks of black box decision-making

María Estrella Gutiérrez David

Universidad Complutense de Madrid, Madrid, Spain

ROR 02p0gd045

Journal: Derecom

ISSN: 1988-2629

Year of publication: 2021

Issue: 31

Type: Article

Abstract

There is broad consensus that the use of AI systems by governments must be transparent and must ensure that citizens are able to understand how and why algorithmic decisions affecting them, individually or collectively, have been made. Transparency legislation can be a useful instrument for this purpose. There is a well-established doctrine, issued by freedom of information authorities, that the source code and algorithms (whether deterministic or predictive) used by governments are «public information». On that premise, comparative and domestic case law on the right of access to public information makes it possible to identify some of the risks inherent in the use of AI systems by governments: covert regulation and bugs; black box decision-making; embedded biases; and the impairment of rights and freedoms. Notwithstanding the potential legal limits (e.g. public security, intellectual property) analysed in this paper, there is wide consensus that the right of access does not always ensure full transparency and understanding of the algorithmic decision-making process, especially where black box models are implemented. Moreover, the very idea of the black box as an inability to understand how an algorithmic system produced an output is becoming expansive. Indeed, cases in which requests for access to the source code of deterministic algorithms are in dispute show the extent to which neither the parties affected by an algorithmic decision nor the judge are able to understand how the system reached that decision. In addition, it has been found that there is no exact match between the technical meaning of «algorithmic transparency» used in explainable AI («XAI») and the legal meaning of «administrative transparency» typical of public law.
De lege ferenda, in order to achieve appropriate interpretability, explainability and justification of governmental decisions made or supported by AI systems, and to enable public scrutiny thereof, freedom of information legislation should determine the relevant information to be publicly disclosed or accessed and ensure that technical documents covering the whole life cycle of such AI systems are duly produced, kept and registered.