Empirical analysis of fairness-aware data segmentation

Cited by: 0
Authors
Okura, Seiji [1 ]
Mohri, Takao [1 ]
Affiliations
[1] Fujitsu Ltd, Res Ctr AI Eth, Kawasaki, Kanagawa, Japan
Source
2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW | 2022
Keywords
fairness; machine learning; data segmentation; empirical analysis; bias
DOI
10.1109/ICDMW58026.2022.00029
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Fairness in machine learning is a recently established research area aimed at mitigating the bias of unfair models that treat unprivileged people unfavorably on the basis of protected attributes. We take an approach to mitigating such bias based on the idea of data segmentation, that is, dividing the data into segments within which people should be treated similarly. Such an approach is useful in that the mitigation process itself is explainable in cases where similar people should be treated similarly. Although research on such cases exists, the question of the effectiveness of data segmentation itself remains open. In this paper, we answer this question by empirically analyzing experimental results of data segmentation on two datasets, the UCI Adult dataset and the Kaggle Give Me Some Credit (GMSC) dataset. We show empirically that (1) fairness can be controlled during model training by how the data are divided into segments, specifically by selecting the segmenting attributes and setting the number of segments so as to adjust statistics such as the statistical parity of the segments and the mutual information between the attributes; (2) the effects of data segmentation depend on the classifier; and (3) there are weak trade-offs between fairness and accuracy with regard to data segmentation.
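The segment-level statistics mentioned in the abstract can be made concrete with a short sketch. The Python snippet below is not the authors' implementation; the file path, column names, and group/label values are assumptions based on the standard UCI Adult release. It groups the data by a candidate segmenting attribute, reports the statistical parity difference within each segment, and computes the mutual information between the segmenting attribute and the protected attribute, which are the kinds of quantities the paper adjusts by choosing attributes and the number of segments.

```python
# Minimal sketch (assumptions: a local "adult.csv" copy of the UCI Adult
# dataset with columns "education", "sex", "income"; favorable outcome ">50K";
# privileged group "Male"). Not the authors' code.
import pandas as pd
from sklearn.metrics import mutual_info_score

df = pd.read_csv("adult.csv")  # assumed local copy of the UCI Adult dataset

protected = "sex"          # protected attribute
segment_attr = "education"  # example attribute used to form segments
label = "income"           # target column


def statistical_parity_difference(frame: pd.DataFrame) -> float:
    """P(favorable | unprivileged) - P(favorable | privileged) within a frame.

    Returns NaN if a segment contains only one of the two groups.
    """
    fav = frame[label].str.strip() == ">50K"   # some releases use ">50K." instead
    priv = frame[protected].str.strip() == "Male"
    return fav[~priv].mean() - fav[priv].mean()


# Per-segment statistical parity: segments with a small gap are those in
# which "similar people" are already treated similarly by the data.
for seg_value, seg_frame in df.groupby(segment_attr):
    spd = statistical_parity_difference(seg_frame)
    print(f"{seg_value:>15}: SPD = {spd:+.3f}")

# Mutual information between the segmenting attribute and the protected
# attribute: a rough indicator of how strongly the segmentation is
# entangled with the protected attribute.
print("MI(segment, protected) =",
      mutual_info_score(df[segment_attr], df[protected]))
```

In this sketch, changing `segment_attr` (or binning a numeric attribute into a chosen number of segments) is the knob that corresponds to the paper's claim that fairness statistics can be steered by attribute selection and segment count.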
Pages: 155-162
Page count: 8