Empirical analysis of fairness-aware data segmentation

Cited by: 0
Authors
Okura, Seiji [1 ]
Mohri, Takao [1 ]
Affiliation
[1] Fujitsu Ltd, Res Ctr AI Eth, Kawasaki, Kanagawa, Japan
Source
2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW | 2022
Keywords
fairness; machine learning; data segmentation; empirical analysis; bias
DOI
10.1109/ICDMW58026.2022.00029
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Fairness in machine learning is a recently established research area concerned with mitigating the bias of unfair models that treat unprivileged people unfavorably on the basis of protected attributes. We take an approach to mitigating such bias based on the idea of data segmentation, that is, dividing the data into segments within which people should be treated similarly. Such an approach is useful in the sense that the mitigation process itself is explainable in cases where similar people should be treated similarly. Although research on such cases exists, the question of whether data segmentation itself is effective remains open. In this paper, we answer this question by empirically analyzing the results of data-segmentation experiments on two datasets, the UCI Adult dataset and the Kaggle Give Me Some Credit (GMSC) dataset. We show empirically that (1) fairness can be controlled during model training through the way the data are divided into segments, more specifically, by selecting the segmentation attributes and setting the number of segments so as to adjust statistics such as the statistical parity of the segments and the mutual information between attributes; (2) the effects of data segmentation depend on the classifier; and (3) there are weak trade-offs between fairness and accuracy with respect to data segmentation.
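To make the segmentation idea concrete, the following sketch (not the authors' implementation) illustrates it on synthetic stand-in data: records are split by a candidate segmentation attribute, the statistical parity difference with respect to the protected attribute is computed within each segment, and the mutual information between the segmentation attribute and the protected attribute is reported. Column names such as education, sex, and income_gt_50k are hypothetical placeholders for fields of a dataset like UCI Adult.

# Minimal sketch (not the authors' code): segment data by a chosen attribute,
# then inspect per-segment statistical parity and the mutual information
# between the segmentation attribute and the protected attribute.
import numpy as np
import pandas as pd
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a dataset like UCI Adult: a protected attribute,
# a candidate segmentation attribute, and a binary label (all hypothetical).
n = 5000
df = pd.DataFrame({
    "sex":       rng.choice(["Female", "Male"], size=n),
    "education": rng.choice(["HS", "Bachelors", "Masters"], size=n, p=[0.5, 0.35, 0.15]),
})
# The label depends on both attributes, so segments differ in parity.
base = 0.25 + 0.15 * (df["sex"] == "Male") + 0.20 * (df["education"] != "HS")
df["income_gt_50k"] = rng.random(n) < base

def statistical_parity_difference(frame, protected, label, privileged):
    """P(y=1 | privileged group) - P(y=1 | unprivileged group) within one segment."""
    priv = frame[frame[protected] == privileged][label].mean()
    unpriv = frame[frame[protected] != privileged][label].mean()
    return priv - unpriv

# Segment the data by the chosen attribute and measure parity per segment.
for segment_value, segment in df.groupby("education"):
    spd = statistical_parity_difference(segment, "sex", "income_gt_50k", "Male")
    print(f"segment={segment_value:10s}  n={len(segment):5d}  SPD={spd:+.3f}")

# Mutual information between the segmentation attribute and the protected
# attribute: one of the statistics the abstract suggests monitoring when
# choosing segmentation attributes.
mi = mutual_info_score(df["education"], df["sex"])
print(f"MI(education; sex) = {mi:.4f}")

In this framing, choosing different segmentation attributes or numbers of segments changes the per-segment parity and the mutual-information statistics, which is the knob the paper analyzes.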
Pages: 155-162
Number of pages: 8