Reports of errors committed in public contexts by facial recognition systems based on machine learning techniques have multiplied. Even so, these systems are increasingly used by the Brazilian public administration. This raises a key problem: how can errors committed by facial recognition systems be prevented or mitigated when these systems are used in the design and implementation of public policies? Guided by the understanding that algorithmic transparency is key to preventing and mitigating such errors, we empirically analysed whether the Brazilian General Data Protection Law (Lei Geral de Proteção de Dados Pessoais - LGPD, in its Portuguese acronym) has been used to promote this kind of transparency where facial recognition systems are employed. We limited our study to the public transportation sector of 30 large Brazilian municipalities. To gather information, we used the Access to Information Law to send a questionnaire to the municipal public agencies responsible for the public transportation system, asking how the LGPD operates in this public policy area. Based on legal analysis, we built an algorithmic transparency scale and found that, in the sector studied, the level of transparency is "Very Low" in most municipalities. This finding indicates a high risk that errors made by facial recognition systems will go unchecked. It also suggests that the Brazilian public administration does not know how to use the systems in question ethically, and that this lack of knowledge may extend to other Artificial Intelligence systems.