Bias in Facial Recognition Technologies Used by Law Enforcement: Understanding the Causes and Searching for a Way Out

Cited by: 2
Author
Limante, Agne [1 ]
Affiliation
[1] Lithuanian Centre for Social Sciences, Law Institute, Vilnius, Lithuania
Keywords
facial recognition; facial recognition in law enforcement; artificial intelligence; AI bias; law enforcement; face recognition; race; accurate
DOI
10.1080/18918131.2023.2277581
Chinese Library Classification (CLC): D0 [Political Science, Political Theory]
Discipline Classification Codes: 0302; 030201
Abstract
In defining facial recognition technologies (FRT) as artificial intelligence, we emphasize the apparent objectivity of machine-assisted decision-making. This creates the false impression that the results these technologies produce are free from the mistakes our eyes and minds often make, and from the stereotypes and prejudices we find difficult to overcome. However, AI is neither a detached technology nor a completely objective algorithm; it is code written by humans, and it therefore follows the rules humans build into it. These rules can be directly or indirectly biased, or can perpetuate inequalities that our societies continue to uphold. Moreover, the use of FRT depends on human discretion: the technology is deployed and operated by humans, which introduces further potential for human bias. This paper focuses on the challenge that FRT used by law enforcement authorities can be affected by racial and other biases and prejudices, or may be deployed in ways that give rise to bias and discrimination. It discusses how bias can be introduced into FRT software, how it may manifest during use, and how it can produce unwanted side effects that negatively affect certain population groups. Finally, the paper considers whether and how these challenges can be overcome, focusing on data-related and social perspectives.
Pages: 115–134 (20 pages)