Machine Learning Robustness, Fairness, and their Convergence

Cited by: 9
Authors:
Lee, Jae-Gil [1]
Roh, Yuji [1]
Song, Hwanjun [2]
Whang, Steven Euijong [1]
Affiliations:
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
[2] NAVER AI Lab, Seoul, South Korea
Source:
KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING | 2021
DOI:
10.1145/3447548.3470799
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
Responsible AI becomes critical when robustness and fairness must be satisfied together. Traditionally, the two topics have been studied by different communities for different applications. Robust training is designed for noisy or poisoned data, where image data is typically considered; in comparison, fair training primarily deals with biased data, where structured data is typically considered. Nevertheless, robust training and fair training are fundamentally similar in that both aim to fix the inherent flaws of real-world data. In this tutorial, we first cover state-of-the-art robust training techniques, where most of the research is on combating various types of label noise. In particular, we cover label noise modeling, robust training approaches, and real-world noisy data sets. Then, proceeding to the related fairness literature, we discuss pre-processing, in-processing, and post-processing unfairness mitigation techniques, depending on whether the mitigation occurs before, during, or after model training. Finally, we cover the recent trend of combining robust and fair training, which comes in two flavors: one makes fair training more robust (i.e., robust fair training), and the other treats robustness and fairness as equals and incorporates them into a holistic framework. This tutorial is timely and novel because the convergence of the two topics is increasingly common but has yet to be addressed in tutorials. The tutors have extensive experience publishing papers in top-tier machine learning and data mining venues and developing machine learning platforms.
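One widely used family of robust training approaches mentioned in the abstract selects likely-clean samples by their loss values ("small-loss" selection, as in Co-teaching-style methods). The following is a minimal NumPy sketch of that idea, not the tutorial's specific method; the function name and the toy loss values are illustrative.

```python
import numpy as np

def small_loss_selection(losses, noise_rate):
    """Keep the (1 - noise_rate) fraction of samples with the smallest
    loss; under label noise, small-loss samples tend to be the ones
    whose labels are correct, so only they are used for the update."""
    n_keep = int(len(losses) * (1.0 - noise_rate))
    # indices of the n_keep smallest per-sample losses
    keep = np.argsort(losses)[:n_keep]
    return np.sort(keep)

# toy example: 6 samples, two with large (likely mislabeled) losses
losses = np.array([0.1, 2.5, 0.3, 0.2, 3.0, 0.4])
print(small_loss_selection(losses, noise_rate=1 / 3))  # -> [0 2 3 5]
```

In practice the per-sample losses come from the current model at each mini-batch, and the assumed noise rate is either known or estimated.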
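A classic example of the pre-processing mitigation the abstract refers to is reweighing (Kamiran and Calders), which reweights training samples so the sensitive attribute becomes statistically independent of the label. A minimal NumPy sketch, with a hypothetical toy data set; the tutorial itself covers a broader range of pre-processing techniques.

```python
import numpy as np

def reweighing_weights(groups, labels):
    """Per-sample weights w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y).
    In the weighted data set, group membership A and label Y are
    independent, removing the group-label bias before training."""
    w = np.empty(len(labels), dtype=float)
    for a in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == a) & (labels == y)
            if mask.any():
                w[mask] = (np.mean(groups == a) * np.mean(labels == y)
                           / np.mean(mask))
    return w

# biased toy data: group 1 receives the positive label more often
groups = np.array([0, 0, 0, 1, 1, 1])
labels = np.array([0, 0, 1, 1, 1, 0])
print(reweighing_weights(groups, labels))
# -> [0.75 0.75 1.5  0.75 0.75 1.5 ]
```

Underrepresented (group, label) pairs receive weights above 1 and overrepresented pairs below 1; any downstream learner that accepts sample weights can then train on the debiased distribution.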
Pages: 4046-4047 (2 pages)