Machine learning fairness notions: Bridging the gap with real-world applications

Cited by: 43
Authors
Makhlouf, Karima [1 ]
Zhioua, Sami [2 ]
Palamidessi, Catuscia [3 ]
Affiliations
[1] Univ Quebec Montreal, Montreal, PQ, Canada
[2] Higher Coll Technol, Dubai, U Arab Emirates
[3] IPP, Ecole Polytech, INRIA, Paris, France
Funding
European Research Council;
Keywords
Fairness; Machine learning; Discrimination; Survey; Systemization of Knowledge (SoK); RISK; BIAS; IDENTIFICATION; PREDICTION; ALGORITHM; LEVEL;
DOI
10.1016/j.ipm.2021.102642
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
Fairness has emerged as an important requirement to guarantee that Machine Learning (ML) predictive systems do not discriminate against specific individuals or entire sub-populations, in particular minorities. Given the inherent subjectivity of the concept of fairness, several notions of fairness have been introduced in the literature. This paper is a survey that illustrates the subtleties between fairness notions through a large number of examples and scenarios. In addition, unlike other surveys in the literature, it addresses the question of "which notion of fairness is most suited to a given real-world scenario, and why?". Our attempt to answer this question consists of (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and then (3) matching these two elements to recommend the most suitable fairness notion for every specific setup. The results are summarized in a decision diagram that practitioners and policymakers can use to navigate the relatively large catalog of ML fairness notions.
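For concreteness, below is a minimal Python sketch, not taken from the paper, of two group fairness notions that surveys of this kind commonly catalog: statistical parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups). The function names, the binary group encoding, and the toy data are illustrative assumptions.

# A minimal, generic sketch (not the authors' code) of two standard group
# fairness notions. All names and toy values below are assumptions made
# for illustration only.
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(Yhat = 1 | group = 0) - P(Yhat = 1 | group = 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Toy predictions for two demographic groups (values are made up).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_difference(y_pred, group))         # -0.25
print(equal_opportunity_difference(y_true, y_pred, group))  # ~ -0.33

A value near zero indicates that the corresponding notion is approximately satisfied for the two groups; which notion to enforce in a given deployment is precisely the question the survey's decision diagram addresses.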
Pages: 32