Exploring gender biases in ML and AI academic research through systematic literature review

Cited by: 34
Authors
Shrestha, Sunny [1 ]
Das, Sanchari [1 ]
Affiliations
[1] University of Denver, Inspirit Lab, Denver, CO 80208, USA
Source
FRONTIERS IN ARTIFICIAL INTELLIGENCE | 2022, Vol. 5
Keywords
machine learning; gender bias; SoK; inclusivity and diversity; artificial intelligence; recommender systems
DOI
10.3389/frai.2022.976838
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Automated systems that implement machine learning (ML) and artificial intelligence (AI) algorithms offer promising solutions to a variety of technological and non-technological problems. Although industry leaders are rapidly adopting these systems for applications ranging from marketing to national defense, the systems are not without flaws. Many of them have recently been found to inherit and propagate gender and racial biases that disadvantage minority populations. In this paper, we analyze academic publications on gender bias in ML and AI algorithms, outlining the themes, detection methods, and mitigation methods explored in this area of research. Through a detailed analysis of N = 120 papers, we map the current research landscape on gender-specific biases in ML- and AI-assisted automated systems. We further identify aspects of ML/AI gender-bias research that remain underexplored and require more attention, focusing chiefly on the lack of user studies and of inclusivity in this field. We also shed light on gender bias as experienced by algorithm designers themselves. In conclusion, this paper provides a holistic view of the breadth of studies on exploring, detecting, and mitigating gender biases in ML and AI systems, and outlines future directions for research aimed at delivering fair and accessible ML and AI systems to all users.
Pages: 17