Adaptive Sensitive Reweighting to Mitigate Bias in Fairness-aware Classification

Cited by: 79
Authors
Krasanakis, Emmanouil [1]
Spyromitros-Xioufis, Eleftherios [1]
Papadopoulos, Symeon [1]
Kompatsiaris, Yiannis [1]
Affiliations
[1] CERTH ITI, Thessaloniki, Greece
Source
WEB CONFERENCE 2018: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW2018) | 2018
Funding
EU Horizon 2020
Keywords
DISCRIMINATION
DOI
10.1145/3178876.3186133
Chinese Library Classification
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
Machine learning bias and fairness have recently emerged as key issues due to the pervasive deployment of data-driven decision making in a variety of sectors and services. It has often been argued that unfair classifications can be attributed to bias in training data, but previous attempts to "repair" training data have met with limited success. To circumvent shortcomings prevalent in data repairing approaches, such as those that weight training samples of the sensitive group (e.g. gender, race, financial status) based on their misclassification error, we present a process that iteratively adapts training sample weights with a theoretically grounded model. This model addresses different kinds of bias to better achieve fairness objectives, such as trade-offs between accuracy and disparate impact elimination or disparate mistreatment elimination. We show that, compared to previous fairness-aware approaches, our methodology achieves better or similar trade-offs between accuracy and unfairness mitigation on real-world and synthetic datasets.
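The core idea the abstract describes, iteratively adapting training sample weights so the classifier is nudged toward a fairness objective, can be illustrated with a minimal sketch. This is not the paper's actual model; the synthetic data, the logistic-regression learner, the 1.5 boost factor, and the misclassification-based update rule below are all invented here purely to show the general reweight-retrain loop:

```python
import numpy as np

# Illustrative sketch of iterative sensitive reweighting (not the paper's
# exact method): misclassified samples from the protected group gain
# weight each round, pushing the retrained classifier toward treating
# the two groups more evenly.

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)            # 0 = protected, 1 = privileged
x = rng.normal(group * 0.8, 1.0, n)      # feature shifted by group: biased data
y = (x + rng.normal(0.0, 0.5, n) > 0.4).astype(int)

def train_weighted_logreg(x, y, w, lr=0.1, steps=500):
    """Plain 1-D weighted logistic regression fit by gradient descent."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * x + b)))
        grad = w * (p - y)                # per-sample weighted gradient
        a -= lr * np.mean(grad * x)
        b -= lr * np.mean(grad)
    return a, b

w = np.ones(n)
for _ in range(10):                      # outer reweighting loop
    a, b = train_weighted_logreg(x, y, w)
    pred = ((a * x + b) > 0).astype(int)
    miss = pred != y
    # boost weights of misclassified protected-group samples
    w = np.where((group == 0) & miss, w * 1.5, w)
    w *= n / w.sum()                      # keep total weight constant

# positive-prediction rate per group (gap relates to disparate impact)
rates = [pred[group == g].mean() for g in (0, 1)]
```

The outer loop is the part the paper replaces with a theoretically grounded weight model; in this toy version the update rule is a fixed multiplicative boost chosen only for illustration.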
Pages: 853-862 (10 pages)