Gender bias in AI-based decision-making systems: a systematic literature review

Cited by: 10
Authors
Nadeem, Ayesha [1 ]
Marjanovic, Olivera [1 ]
Abedin, Babak [2 ]
Affiliations
[1] Univ Technol Sydney, Sydney, Australia
[2] Macquarie Univ, Sydney, Australia
Keywords
Artificial Intelligence; Fairness; Gender Bias; Health Care; Future
DOI
10.3127/ajis.v26i0.3835
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
The related literature and industry press suggest that artificial intelligence (AI)-based decision-making systems may be biased towards gender, which in turn impacts individuals and societies. The information systems (IS) field has recognised the rich contribution of AI-based outcomes and their effects; however, there is a lack of IS research on the management of gender bias in AI-based decision-making systems and its adverse effects. Hence, concern about gender bias in AI-based decision-making systems is rising. In particular, a better understanding is needed of the factors contributing to gender bias and of effective approaches to mitigating it. Therefore, this study contributes to the existing literature by conducting a Systematic Literature Review (SLR) of the extant literature and presenting a theoretical framework for the management of gender bias in AI-based decision-making systems. The SLR results indicate that research on gender bias in AI-based decision-making systems is not yet well established, highlighting great potential for future IS research in this area, as articulated in the paper. Based on this review, we conceptualise gender bias in AI-based decision-making systems as a socio-technical problem and propose a theoretical framework that combines technological, organisational, and societal approaches, together with four propositions for mitigating biased effects. Lastly, the paper outlines future research on the management of gender bias in AI-based decision-making systems in the IS context.
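The "technological approaches" the abstract alludes to typically begin with auditing a model's decisions for gender disparities. As a minimal, hypothetical sketch (not taken from the paper), the Python snippet below computes two group-fairness metrics commonly used for such audits; all names and figures are illustrative assumptions.

```python
# Illustrative sketch (not from the reviewed paper): quantifying gender bias
# in a binary decision system via two common group-fairness metrics.
# All identifiers and numbers below are hypothetical.
from dataclasses import dataclass


@dataclass
class GroupRates:
    selection_rate: float      # P(decision = 1 | group)
    true_positive_rate: float  # P(decision = 1 | qualified, group)


def demographic_parity_diff(a: GroupRates, b: GroupRates) -> float:
    """Difference in selection rates between two gender groups.
    0.0 means parity; larger magnitude means more disparate outcomes."""
    return a.selection_rate - b.selection_rate


def equal_opportunity_diff(a: GroupRates, b: GroupRates) -> float:
    """Difference in true-positive rates: do qualified candidates of
    each gender receive positive decisions at the same rate?"""
    return a.true_positive_rate - b.true_positive_rate


# Hypothetical audit of a hiring model's decisions:
women = GroupRates(selection_rate=0.18, true_positive_rate=0.55)
men = GroupRates(selection_rate=0.30, true_positive_rate=0.72)

print(f"Demographic parity difference: {demographic_parity_diff(women, men):+.2f}")
print(f"Equal opportunity difference:  {equal_opportunity_diff(women, men):+.2f}")
```

Metrics like these cover only the technological leg of the paper's socio-technical framing; the organisational and societal approaches it proposes are not reducible to code.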
Pages: 33-34
Page count: 2