AI Fairness: From Machine Learning to Federated Learning

Cited by: 1
Authors
Patnaik, Lalit Mohan [1 ,5 ]
Wang, Wenfeng [2 ,3 ,4 ,5 ,6 ]
Affiliations
[1] Natl Inst Adv Studies, Sch Humanities, Consciousness Studies Program, Bangalore 560012, India
[2] Shanghai Inst Technol, Res Inst Intelligent Engn & Data Applicat, Shanghai 201418, Peoples R China
[3] Chinese Acad Sci, Res Ctr Ecol & Environm Cent Asia, Urumqi 830011, Peoples R China
[4] Anand Int Coll Engn, Appl Nonlinear Sci Lab, Jaipur 391320, India
[5] London Inst Technol, ASE London CTI SCO, London CR26EQ, England
[6] IMT Inst, Sino Indian Joint Res Ctr AI & Robot, Bhubaneswar 752054, India
Source
CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES | 2024, Vol. 139, Issue 2
Keywords
Formulation; evaluation; classification; constraints; imbalance; biases
DOI
10.32604/cmes.2023.029451
CLC Number
T [Industrial Technology]
Discipline Classification Code
08
Abstract
This article reviews the theory of fairness in AI, from machine learning to federated learning, and discusses the constraints on precision AI fairness together with prospective solutions. To enable a reliable and quantitative evaluation of AI fairness, many associated concepts have been proposed, formulated and classified. However, the inexplicability of machine learning systems makes it almost impossible to include all the necessary details at the modelling stage to ensure fairness. Privacy concerns induce data unfairness, and hence biases in the datasets used to evaluate AI fairness are unavoidable. The imbalance between algorithms' utility and humanization has further reinforced such concerns. Even in federated learning systems, these constraints on precision AI fairness persist. A prospective solution is to reconcile the federated learning processes and thereby reduce biases and imbalances.
Pages: 1203-1215
Page count: 13