Strategies to improve fairness in artificial intelligence: A systematic literature review

Cited by: 1
Authors
Trigo, Antonio [1 ,2 ,3 ]
Stein, Nubia [2 ]
Belfo, Fernando Paulo [2 ,3 ]
Affiliations
[1] Inst Univ Lisboa ISCTE IUL, ISTAR, Lisbon, Portugal
[2] Polytech Univ Coimbra, Coimbra, Portugal
[3] Polytech Univ Coimbra, CEOS PP Coimbra, Coimbra, Portugal
Keywords
Artificial intelligence; fairness; fairness techniques; fairness metrics; systematic literature review; AI
DOI
10.3233/EFI-240045
Chinese Library Classification
G40 [Education]
Discipline classification codes
040101; 120403
Abstract
Decisions based on artificial intelligence can reproduce biases or prejudices present in skewed historical data and poorly designed systems, with serious social consequences for underrepresented groups. This paper presents a systematic literature review of technical, practicable solutions for improving fairness in artificial intelligence, classified along several perspectives: fairness metrics, moment of intervention (pre-processing, in-processing, or post-processing), research area, and the datasets and algorithms used in each study. Its main contribution is to establish common ground on the techniques used to improve fairness in artificial intelligence, understood as the absence of bias or discrimination in the decisions made by artificial intelligence systems.
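As a concrete illustration of the group fairness metrics the review surveys, the sketch below computes the statistical parity difference: the gap in positive-prediction rates between an unprivileged and a privileged group. The function name and the toy data are illustrative assumptions, not taken from the reviewed paper; a value of 0 indicates demographic parity, and negative values indicate the unprivileged group receives favorable outcomes less often.

```python
# Minimal sketch of one common group fairness metric (statistical parity
# difference). All names and data here are illustrative, not from the paper.
def statistical_parity_difference(y_pred, group):
    """Rate of positive predictions for the unprivileged group (group == 0)
    minus the rate for the privileged group (group == 1)."""
    priv = [y for y, g in zip(y_pred, group) if g == 1]
    unpriv = [y for y, g in zip(y_pred, group) if g == 0]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

# Toy predictions: privileged group receives 3/4 positives, unprivileged 1/4.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(preds, groups))  # -0.5
```

Pre-processing, in-processing, and post-processing interventions, as classified in the review, all aim to push metrics like this one toward zero, at different stages of the machine-learning pipeline.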
Pages: 323-346
Page count: 24
Related papers
54 records in total
[1]   Attenuation of Human Bias in Artificial Intelligence: An Exploratory Approach [J].
Ahmed, Saad ;
Athyaab, Saif Ali ;
Muqtadeer, Shaik Abdul .
PROCEEDINGS OF THE 6TH INTERNATIONAL CONFERENCE ON INVENTIVE COMPUTATION TECHNOLOGIES (ICICT 2021), 2021, :557-563
[2]   A monitoring framework for transparency and fairness in big data platform [J].
Aslaoui Mokhtari, Karima ;
Benbernou, Salima ;
Ouziri, Mourad ;
Lahmar, Hakim ;
Younas, Muhammad .
CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2021, 33 (23)
[3]  
Barocas S, 2019, Fairness and machine learning
[4]   Big Data's Disparate Impact [J].
Barocas, Solon ;
Selbst, Andrew D. .
CALIFORNIA LAW REVIEW, 2016, 104 (03) :671-732
[5]   AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias [J].
Bellamy, R. K. E. ;
Dey, K. ;
Hind, M. ;
Hoffman, S. C. ;
Houde, S. ;
Kannan, K. ;
Lohia, P. ;
Martino, J. ;
Mehta, S. ;
Mojsilovic, A. ;
Nagar, S. ;
Ramamurthy, K. Natesan ;
Richards, J. ;
Saha, D. ;
Sattigeri, P. ;
Singh, M. ;
Varshney, K. R. ;
Zhang, Y. .
IBM JOURNAL OF RESEARCH AND DEVELOPMENT, 2019, 63 (4-5)
[6]   Three naive Bayes approaches for discrimination-free classification [J].
Calders, Toon ;
Verwer, Sicco .
DATA MINING AND KNOWLEDGE DISCOVERY, 2010, 21 (02) :277-292
[7]  
Calmon Flavio P., 2017, Advances in Neural Information Processing Systems (NIPS 2017), P3993
[8]  
Camp D, 2022, Machine Learning Cheat Sheet
[9]   Continuing the Conversation on How Structural Racial and Ethnic Inequalities Affect AI Biases [J].
Cardenas, Soraya ;
Vallejo-Cardenas, Serafin F. .
2019 IEEE INTERNATIONAL SYMPOSIUM ON TECHNOLOGY AND SOCIETY (ISTAS), 2019,
[10]   Fairness in Machine Learning: A Survey [J].
Caton, Simon ;
Haas, Christian .
ACM COMPUTING SURVEYS, 2024, 56 (07) :1-38