How AI developers can assure algorithmic fairness

Cited by: 0
Authors
Xivuri K. [1 ]
Twinomurinzi H. [1 ]
Affiliations
[1] Centre for Applied Data Science, University of Johannesburg, Johannesburg
Source
Discover Artificial Intelligence | 2023, Vol. 3, Issue 1
Keywords
AI developers; Algorithms; Artificial intelligence (AI); Domination-free development environment; Fairness; Jurgen Habermas; Process model;
DOI
10.1007/s44163-023-00074-4
Abstract
Artificial intelligence (AI) has rapidly become one of the technologies used for competitive advantage. However, there are growing concerns about bias in AI models, as AI developers risk introducing bias both unintentionally and intentionally. This study used a qualitative approach to investigate how AI developers can contribute to the development of fair AI models. The key findings reveal that the risk of bias stems mainly from the lack of gender and social diversity in AI development teams and from pressure by AI managers to deliver much-anticipated results quickly. The integrity of AI developers is also critical, as they may conceal bias from management and other AI stakeholders. The testing phase before model deployment risks bias because it is rarely representative of the diverse societal groups that may be affected. The study makes practical recommendations in four main areas: governance, social, technical, and training and development processes. Responsible organisations need to take deliberate action to ensure that their AI developers adhere to fair processes when developing AI; AI developers must prioritise ethical considerations and consider the impact their models may have on society; partnerships should be established between AI developers, AI stakeholders, and the society that may be affected by AI models; and AI developers need to prioritise transparency and explainability in their models while ensuring adequate testing for bias and corrective measures before deployment. Emotional intelligence training should also be provided to AI developers to help them engage in productive conversations with individuals outside the development team. © The Author(s) 2023.