Survey on fairness notions and related tensions

Cited by: 11
Authors
Alves, Guilherme [1 ]
Bernier, Fabien [1 ]
Couceiro, Miguel [1 ]
Makhlouf, Karima [2 ]
Palamidessi, Catuscia [2 ]
Zhioua, Sami [2 ]
Affiliations
[1] Univ Lorraine, CNRS, Inria NGE, LORIA, F-54000 Nancy, France
[2] Ecole Polytech, Inria, IPP, F-91120 Paris, France
Funding
EU Horizon 2020; European Research Council;
Keywords
Fairness notion; Tension within fairness; Unfairness mitigation; Discrimination;
DOI
10.1016/j.ejdp.2023.100033
CLC number
C93 [Management Science];
Subject classification codes
12; 1201; 1202; 120202;
Abstract
Automated decision systems are increasingly used to make consequential decisions in problems such as job hiring and loan granting, with the hope of replacing subjective human decisions with objective machine learning (ML) algorithms. However, ML-based decision systems are prone to bias, which may result in unfair decisions. Several notions of fairness have been defined in the literature to capture the different subtleties of this ethical and social concept (e.g., statistical parity, equal opportunity, etc.). Enforcing fairness requirements while learning models creates several types of tensions among the different notions of fairness and other desirable properties such as privacy and classification accuracy. This paper surveys the commonly used fairness notions and discusses the tensions among them, as well as with privacy and accuracy. Different methods to address the fairness-accuracy trade-off (classified into four approaches, namely, pre-processing, in-processing, post-processing, and hybrid) are reviewed. The survey is consolidated with experimental analysis carried out on fairness benchmark datasets to illustrate the relationship between fairness measures and accuracy in real-world scenarios.
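The two fairness notions named in the abstract can be made concrete with small group-difference metrics. The sketch below is illustrative only and is not taken from the surveyed paper; the function names, the binary group encoding, and the toy data are all assumptions.

```python
# Illustrative sketch of two fairness notions mentioned in the abstract:
# statistical parity (difference in positive-prediction rates between groups)
# and equal opportunity (difference in true-positive rates between groups).

def statistical_parity_diff(y_pred, group):
    """P(Y_hat = 1 | group = 0) - P(Y_hat = 1 | group = 1)."""
    rate = {}
    for g in (0, 1):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        rate[g] = sum(preds) / len(preds)
    return rate[0] - rate[1]

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = {}
    for g in (0, 1):
        # Restrict to individuals whose true label is positive.
        pos = [p for p, t, a in zip(y_pred, y_true, group) if a == g and t == 1]
        tpr[g] = sum(pos) / len(pos)
    return tpr[0] - tpr[1]

# Toy data (hypothetical): predictions, labels, binary sensitive attribute.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
y_true = [1, 0, 1, 0, 1, 1, 0, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(statistical_parity_diff(y_pred, group))        # 0.75 - 0.25 = 0.5
print(equal_opportunity_diff(y_true, y_pred, group)) # 1.0 - 1/3 ≈ 0.667
```

A value of 0 for either metric means the groups are treated identically under that notion; a perfectly accurate classifier can still have a nonzero statistical parity difference when base rates differ, which is one of the tensions the survey discusses.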
Pages: 14