Challenges of responsible AI in practice: scoping review and recommended actions

Cited by: 5
Authors
Sadek, Malak [1 ]
Kallina, Emma [2 ,3 ]
Bohne, Thomas [2 ,3 ]
Mougenot, Celine [1 ]
Calvo, Rafael A. [1 ]
Cave, Stephen [2 ,3 ]
Affiliations
[1] Imperial College London, Dyson School of Design Engineering, London, England
[2] University of Cambridge, Leverhulme Centre for the Future of Intelligence, Cambridge, England
[3] University of Cambridge, Cyber-Human Lab, Cambridge, England
Keywords
Artificial Intelligence; Responsible AI; Participatory AI; Human-centered AI
DOI
10.1007/s00146-024-01880-9
CLC number
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Responsible AI (RAI) guidelines aim to ensure that AI systems respect democratic values. While a step in the right direction, they currently fail to impact practice. Our work discusses reasons for this lack of impact and clusters them into five areas: (1) the abstract nature of RAI guidelines, (2) the problem of selecting and reconciling values, (3) the difficulty of operationalising RAI success metrics, (4) the fragmentation of the AI pipeline, and (5) the lack of internal advocacy and accountability. We then introduce approaches to RAI from a range of disciplines and explore their potential as solutions to the identified challenges. We anchor these solutions in practice through concrete examples, bridging the gap between the theoretical considerations of RAI and the on-the-ground processes that currently shape how AI systems are built. Our work considers the socio-technical nature of RAI limitations and the resulting need for socio-technical solutions.
Pages: 199-215
Page count: 17