Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction

Cited by: 134
Authors
Grgic-Hlaca, Nina [1 ]
Redmiles, Elissa M. [2 ]
Gummadi, Krishna P. [1 ]
Weller, Adrian [3 ]
Affiliations
[1] Saarland Univ, MPI SWS, Saarbrucken, Germany
[2] Univ Maryland, College Pk, MD 20742 USA
[3] Univ Cambridge, Alan Turing Inst, Cambridge, England
Source
WEB CONFERENCE 2018: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW2018) | 2018
Keywords
Algorithmic Fairness; Algorithmic Discrimination; Fairness in Machine Learning; Procedural Fairness; Fair Feature Selection;
DOI
10.1145/3178876.3186138
CLC number
TP39 [Computer applications];
Discipline codes
081203; 0835;
Abstract
As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting risk of criminal recidivism, concerns have been raised about the fairness of algorithmic decision making. Most prior work on algorithmic fairness normatively prescribes how fair decisions ought to be made. In contrast, here, we descriptively survey users about how they perceive and reason about fairness in algorithmic decision making. A key contribution of this work is the framework we propose to understand why people perceive certain features as fair or unfair to be used in algorithms. Our framework identifies eight properties of features, such as relevance, volitionality, and reliability, as latent considerations that inform people's moral judgments about the fairness of feature use in decision-making algorithms. We validate our framework through a series of scenario-based surveys with 576 people. We find that, based on a person's assessment of the eight latent properties of a feature in our exemplar scenario, we can accurately (> 85%) predict whether the person will judge the use of the feature as fair. Our findings have important implications. At a high level, we show that people's unfairness concerns are multi-dimensional and argue that future studies need to address unfairness concerns beyond discrimination. At a low level, we find considerable disagreements in people's fairness judgments. We identify root causes of the disagreements and note possible pathways to resolve them.
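The abstract's predictive claim — that a respondent's ratings of eight latent feature properties suffice to classify their fairness judgment with over 85% accuracy — can be illustrated with a toy classifier. The sketch below is not the paper's implementation: only relevance, volitionality, and reliability are named in the abstract, so the other five property names are hypothetical placeholders, the respondent data are synthetic, and plain logistic regression is just one plausible model choice.

```python
# Illustrative sketch only (not the paper's code): classify a fairness
# judgment ("fair" = 1) from ratings of eight latent feature properties,
# using logistic regression trained by stochastic gradient descent.
import math
import random

# Three names come from the abstract; the other five are hypothetical
# placeholders standing in for the remaining latent properties.
PROPERTIES = ["relevance", "volitionality", "reliability",
              "privacy", "prop5", "prop6", "prop7", "prop8"]

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow in exp()
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.3, epochs=1000):
    """Fit weights w and bias b by SGD on the logistic log-loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5

# Synthetic respondents: each rates all eight properties in [0, 1];
# the toy ground truth calls a feature "fair" when relevance is high
# and privacy concern is low (purely illustrative, not survey data).
random.seed(0)
X = [[random.random() for _ in PROPERTIES] for _ in range(200)]
y = [1 if x[0] > 0.5 and x[3] < 0.5 else 0 for x in X]

w, b = train(X, y)
acc = sum(predict(w, b, x) == yi for x, yi in zip(X, y)) / len(X)
```

In the paper's setting the ratings and labels come from the scenario-based surveys; the point here is only that a simple linear model over the eight property ratings can recover a judgment rule of this kind.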
Pages: 903-912
Page count: 10