Uncertainty, risk and the use of algorithms in policy decisions: a case study on criminal justice in the USA

Cited: 37
Authors
Hartmann, Kathrin [1 ]
Wenzelburger, Georg [1 ]
Affiliations
[1] Tech Univ Kaiserslautern, Fachbereich Sozialwissensch Policy Anal & Polit O, Postfach 3049, D-67653 Kaiserslautern, Germany
Keywords
Artificial intelligence; Big data; Public policy; Public administration; Criminal justice; Judgment
DOI
10.1007/s11077-020-09414-y
Chinese Library Classification (CLC)
C93 [Management Studies]; D035 [State Administration]; D523 [Administrative Management]; D63 [State Administration]
Discipline Classification Codes
12; 1201; 1202; 120202; 1204; 120401
Abstract
Algorithms are increasingly used in different domains of public policy. They help humans profile the unemployed, support administrations in detecting tax fraud, and produce recidivism risk scores that judges or criminal justice managers take into account when making bail decisions. In recent years, critics have increasingly pointed to the ethical challenges of these tools, emphasizing problems of discrimination, opacity and accountability, and computer scientists have proposed technical solutions to these issues. In contrast to these important debates, the literature on how these tools are implemented in the actual everyday decision-making process has remained cursory. This is problematic because the consequences of automated decision-making (ADM) systems depend at least as much on their implementation in an actual decision-making context as on their technical features. In this study, we show how the introduction of risk assessment tools in the criminal justice sector at the local level in the USA has deeply transformed the decision-making process. We argue that this is mainly because the evidence generated by the algorithm introduces a notion of statistical prediction into a situation that was previously dominated by fundamental uncertainty about the outcome. While this expectation is supported by the case study evidence, the possibility of shifting blame to the algorithm appears much less important to the criminal justice actors.
Pages: 269-287
Page count: 19