How AI developers can assure algorithmic fairness

Cited: 0
Authors
Xivuri K. [1 ]
Twinomurinzi H. [1 ]
Affiliations
[1] Centre for Applied Data Science, University of Johannesburg, Johannesburg
Source
Discover Artificial Intelligence | 2023 / Vol. 3 / Issue 01
Keywords
AI developers; Algorithms; Artificial intelligence (AI); Domination-free development environment; Fairness; Jurgen Habermas; Process model;
DOI
10.1007/s44163-023-00074-4
Abstract
Artificial intelligence (AI) has rapidly become one of the technologies used for competitive advantage. However, there are also growing concerns about bias in AI models as AI developers risk introducing bias both unintentionally and intentionally. This study, using a qualitative approach, investigated how AI developers can contribute to the development of fair AI models. The key findings reveal that the risk of bias is mainly because of the lack of gender and social diversity in AI development teams, and haste from AI managers to deliver much-anticipated results. The integrity of AI developers is also critical as they may conceal bias from management and other AI stakeholders. The testing phase before model deployment risks bias because it is rarely representative of the diverse societal groups that may be affected. The study makes practical recommendations in four main areas: governance, social, technical, and training and development processes. Responsible organisations need to take deliberate actions to ensure that their AI developers adhere to fair processes when developing AI; AI developers must prioritise ethical considerations and consider the impact their models may have on society; partnerships between AI developers, AI stakeholders, and society that might be impacted by AI models should be established; and AI developers need to prioritise transparency and explainability in their models while ensuring adequate testing for bias and corrective measures before deployment. Emotional intelligence training should also be provided to the AI developers to help them engage in productive conversations with individuals outside the development team. © The Author(s) 2023.
Related papers
50 records in total
  • [31] Artificial fairness? Trust in algorithmic police decision-making
    Hobson, Zoe
    Yesberg, Julia A.
    Bradford, Ben
    Jackson, Jonathan
    JOURNAL OF EXPERIMENTAL CRIMINOLOGY, 2023, 19 (01) : 165 - 189
  • [32] Multi-Stakeholder Dialogue for Policy Recommendations on Algorithmic Fairness
    Webb, Helena
    Koene, Ansgar
    Patel, Menisha
    Vallejos, Elvira Perez
    SMSOCIETY'18: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON SOCIAL MEDIA AND SOCIETY, 2018, : 395 - 399
  • [33] Algorithmic Fairness and the Situated Dynamics of Justice
    Fazelpour, Sina
    Lipton, Zachary C.
    Danks, David
    CANADIAN JOURNAL OF PHILOSOPHY, 2022, 52 (01) : 44 - 60
  • [34] Algorithmic Fairness and the Social Welfare Function
    Mullainathan, Sendhil
    ACM EC'18: PROCEEDINGS OF THE 2018 ACM CONFERENCE ON ECONOMICS AND COMPUTATION, 2018, : 1 - 1
  • [35] Algorithmic Decision Making with Conditional Fairness
    Xu, Renzhe
    Cui, Peng
    Kuang, Kun
    Li, Bo
    Zhou, Linjun
    Shen, Zheyan
    Cui, Wei
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 2125 - 2135
  • [36] Fairness, AI & recruitment
    Rigotti, Carlotta
    Fosch-Villaronga, Eduard
    COMPUTER LAW & SECURITY REVIEW, 2024, 53
  • [38] Algorithmic indirect discrimination, fairness and harm
    Thomsen, Frej Klem
    AI AND ETHICS, 2024, 4 (4): 1023 - 1037
  • [39] How Process Experts Enable and Constrain Fairness in AI-Driven Hiring
    Cruz, Ignacio Fernandez
    INTERNATIONAL JOURNAL OF COMMUNICATION, 2024, 18 : 656 - 676
  • [40] Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it
    Liefgreen, Alice
    Weinstein, Netta
    Wachter, Sandra
    Mittelstadt, Brent
    AI & SOCIETY, 2024, 39 (05) : 2183 - 2199