Artificial fairness? Trust in algorithmic police decision-making

Times cited: 20
Authors
Hobson, Zoe [1]
Yesberg, Julia A. [1]
Bradford, Ben [1]
Jackson, Jonathan [2,3]
Affiliations
[1] UCL, Institute for Global City Policing, Department of Security and Crime Science, 35 Tavistock Square, London WC1H 9EZ, England
[2] London School of Economics and Political Science, Department of Methodology, London, England
[3] Sydney Law School, Sydney, NSW, Australia
Keywords
Algorithms; Fairness; Police decision-making; Technology; Trust; Body-worn cameras; Procedural justice; Public support; Legitimacy; Cooperation
DOI
10.1007/s11292-021-09484-9
Chinese Library Classification: DF [Law]; D9 [Law]
Subject classification code: 0301
Abstract
Objectives: Test whether (1) people view a policing decision made by an algorithm as more or less trustworthy than the same decision made by an officer; (2) people presented with a specific instance of algorithmic policing express greater or lesser support for the use of algorithmic policing in general; and (3) people use trust as a heuristic to make sense of an unfamiliar technology like algorithmic policing. Methods: An online experiment tested whether different decision-making methods, outcomes and scenario types affect judgements about the appropriateness and fairness of the decision and the general acceptability of police use of this technology. Results: People see a decision as less fair and less appropriate when an algorithm decides than when an officer decides. Yet perceptions of fairness and appropriateness were strong predictors of support for police use of algorithms, and exposure to a successful use of an algorithm was linked, via trust in the decision made, to greater support for police use of algorithms. Conclusions: Basing decisions solely on algorithms might damage trust, and the more police rely on purely algorithmic decision-making, the less people may trust the resulting decisions. However, mere exposure to the successful use of algorithms seems to enhance the general acceptability of this technology.
Pages: 165-189
Page count: 25
Related papers (70 records in total)
[21] Dhasarathy, A. (2020). When governments turn to AI: Algorithms, trade-offs, and trust. McKinsey & Company.
[22] Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126.
[23] Ferguson, A. G. (2017). The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. New York University Press. DOI: 10.18574/nyu/9781479854608.001.0001
[24] Fussey, P. (2019). Independent Report on the London Metropolitan Police Service's Trial of Live Facial Recognition Technology.
[25] Gerber, M. M., & Jackson, J. (2017). Justifying violence: Legitimacy, ideology and public support for police use of force. Psychology, Crime & Law, 23(1), 79-95.
[26] Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660.
[27] Grimshaw, R. (2020). I RACISM POLICE ENTR.
[28] Grzymek, V. (2019). What Europe knows and thinks about algorithms: Results of a representative survey.
[29] Hamm, J. A., Trinkner, R., & Carr, J. D. (2017). Fair process, trust, and cooperation: Moving toward an integrated framework of police legitimacy. Criminal Justice and Behavior, 44(9), 1183-1212.
[30] Hinds, L., & Murphy, K. (2007). Public satisfaction with police: Using procedural justice to improve police legitimacy. Australian and New Zealand Journal of Criminology, 40(1), 27-42.