How AI developers can assure algorithmic fairness

Cited by: 0
Authors
Xivuri K. [1 ]
Twinomurinzi H. [1 ]
Affiliations
[1] Centre for Applied Data Science, University of Johannesburg, Johannesburg
Source
Discover Artificial Intelligence | 2023, Vol. 3, Issue 01
Keywords
AI developers; Algorithms; Artificial intelligence (AI); Domination-free development environment; Fairness; Jurgen Habermas; Process model;
DOI
10.1007/s44163-023-00074-4
Abstract
Artificial intelligence (AI) has rapidly become one of the technologies used for competitive advantage. However, there are also growing concerns about bias in AI models, as AI developers risk introducing bias both unintentionally and intentionally. This study, using a qualitative approach, investigated how AI developers can contribute to the development of fair AI models. The key findings reveal that the risk of bias stems mainly from the lack of gender and social diversity in AI development teams, and from haste by AI managers to deliver much-anticipated results. The integrity of AI developers is also critical, as they may conceal bias from management and other AI stakeholders. The testing phase before model deployment risks bias because it is rarely representative of the diverse societal groups that may be affected. The study makes practical recommendations in four main areas: governance, social, technical, and training and development processes. Responsible organisations need to take deliberate actions to ensure that their AI developers adhere to fair processes when developing AI; AI developers must prioritise ethical considerations and consider the impact their models may have on society; partnerships should be established between AI developers, AI stakeholders, and the society that might be impacted by AI models; and AI developers need to prioritise transparency and explainability in their models while ensuring adequate testing for bias and corrective measures before deployment. Emotional intelligence training should also be provided to AI developers to help them engage in productive conversations with individuals outside the development team. © The Author(s) 2023.