Strategies to improve fairness in artificial intelligence: A systematic literature review

Cited by: 1
Authors
Trigo, Antonio [1,2,3]
Stein, Nubia [2]
Belfo, Fernando Paulo [2,3]
Affiliations
[1] Inst Univ Lisboa ISCTE IUL, ISTAR, Lisbon, Portugal
[2] Polytech Univ Coimbra, Coimbra, Portugal
[3] Polytech Univ Coimbra, CEOS PP Coimbra, Coimbra, Portugal
Keywords
Artificial intelligence; fairness; fairness techniques; fairness metrics; systematic literature review; AI;
DOI
10.3233/EFI-240045
Chinese Library Classification (CLC)
G40 [Education];
Subject classification codes
040101; 120403;
Abstract
Decisions based on artificial intelligence can reproduce biases or prejudices present in biased historical data and poorly formulated systems, with serious social consequences for underrepresented groups. This paper presents a systematic literature review of technical, feasible, and practicable solutions to improve fairness in artificial intelligence, classified according to different perspectives: fairness metrics, moment of intervention (pre-processing, processing, or post-processing), research area, datasets, and algorithms used in the research. The main contribution of this paper is to establish common ground regarding the techniques that can be used to improve fairness in artificial intelligence, defined as the absence of bias or discrimination in the decisions made by artificial intelligence systems.
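As an illustration of the kind of group-fairness metrics surveyed in reviews of this type, the sketch below computes two common measures (statistical parity difference and equal opportunity difference) for a binary classifier with a binary protected attribute. The function names, variable names, and toy data are illustrative assumptions, not taken from the reviewed paper.

```python
# Minimal sketch (illustrative, not from the reviewed paper): two common
# group-fairness metrics computed from predictions and a binary protected
# attribute. Values near 0 indicate parity between the two groups.
import numpy as np

def statistical_parity_difference(y_pred, protected):
    """P(y_hat = 1 | protected = 0) - P(y_hat = 1 | protected = 1)."""
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    return y_pred[protected == 0].mean() - y_pred[protected == 1].mean()

def equal_opportunity_difference(y_true, y_pred, protected):
    """Difference in true-positive rates between the two groups."""
    y_true, y_pred, protected = map(np.asarray, (y_true, y_pred, protected))
    tpr = lambda group_mask: y_pred[group_mask & (y_true == 1)].mean()
    return tpr(protected == 0) - tpr(protected == 1)

# Toy usage with hypothetical labels, predictions, and group membership.
y_true    = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred    = np.array([1, 0, 1, 0, 0, 1, 1, 1])
protected = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_difference(y_pred, protected))
print(equal_opportunity_difference(y_true, y_pred, protected))
```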
Pages: 323-346
Number of pages: 24
References
54 items in total
[21]  
Friedler, S.A., 2016, On the (im)possibility of fairness, P1
[22]   A comparative study of fairness-enhancing interventions in machine learning [J].
Friedler, Sorelle A. ;
Scheidegger, Carlos ;
Venkatasubramanian, Suresh ;
Choudhary, Sonam ;
Hamilton, Evan P. ;
Roth, Derek .
FAT*'19: PROCEEDINGS OF THE 2019 CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, 2019, :329-338
[23]   Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms [J].
Giovanola, Benedetta ;
Tiribelli, Simona .
AI & SOCIETY, 2023, 38 (02) :549-563
[24]  
Hardt M, 2016, ADV NEUR IN, V29
[25]   A Distributed Fair Machine Learning Framework with Private Demographic Data Protection [J].
Hu, Hui ;
Liu, Yijun ;
Wang, Zhen ;
Lan, Chao .
2019 19TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2019), 2019, :1102-1107
[26]   Data preprocessing techniques for classification without discrimination [J].
Kamiran, Faisal ;
Calders, Toon .
KNOWLEDGE AND INFORMATION SYSTEMS, 2012, 33 (01) :1-33
[27]   Adaptive Sensitive Reweighting to Mitigate Bias in Fairness-aware Classification [J].
Krasanakis, Emmanouil ;
Spyromitros-Xioufis, Eleftherios ;
Papadopoulos, Symeon ;
Kompatsiaris, Yiannis .
WEB CONFERENCE 2018: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW2018), 2018, :853-862
[28]  
Lin, Y.T., 2021, Philosophy & Technology, V34, P65, DOI 10.1007/s13347-020-00406-7
[29]   A Survey on Bias and Fairness in Machine Learning [J].
Mehrabi, Ninareh ;
Morstatter, Fred ;
Saxena, Nripsuta ;
Lerman, Kristina ;
Galstyan, Aram .
ACM COMPUTING SURVEYS, 2021, 54 (06)
[30]   Bias in data-driven artificial intelligence systems-An introductory survey [J].
Ntoutsi, Eirini ;
Fafalios, Pavlos ;
Gadiraju, Ujwal ;
Iosifidis, Vasileios ;
Nejdl, Wolfgang ;
Vidal, Maria-Esther ;
Ruggieri, Salvatore ;
Turini, Franco ;
Papadopoulos, Symeon ;
Krasanakis, Emmanouil ;
Kompatsiaris, Ioannis ;
Kinder-Kurlanda, Katharina ;
Wagner, Claudia ;
Karimi, Fariba ;
Fernandez, Miriam ;
Alani, Harith ;
Berendt, Bettina ;
Kruegel, Tina ;
Heinze, Christian ;
Broelemann, Klaus ;
Kasneci, Gjergji ;
Tiropanis, Thanassis ;
Staab, Steffen .
WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2020, 10 (03)