Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work

Cited by: 6
Authors
Theis, Sabine [1 ]
Jentzsch, Sophie [1 ]
Deligiannaki, Fotini [2 ]
Berro, Charles [2 ]
Raulf, Arne Peter [2 ]
Bruder, Carmen [3 ]
Affiliations
[1] Inst Software Technol, D-51147 Cologne, Germany
[2] Inst AI Safety & Secur, Rathausallee 12, D-53757 St Augustin, Germany
[3] Inst Aerosp Med, Sportallee 5a, D-22335 Hamburg, Germany
Source
ARTIFICIAL INTELLIGENCE IN HCI, AI-HCI 2023, PT I | 2023, Vol. 14050
Keywords
Artificial intelligence; Explainability; Acceptance; Safety-critical contexts; Air-traffic control; Structured literature analysis; Information needs; User requirement analysis; EXPLANATION; FRAMEWORK; INTERNET; MODELS; HEALTH; NEED; USER; AI
DOI
10.1007/978-3-031-35891-3_22
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The increasing prevalence of Artificial Intelligence (AI) in safety-critical contexts such as air-traffic control calls for systems that are not only practical and efficient but also, to some extent, explainable to humans in order to be trusted and accepted. The present structured literature analysis examines n = 236 articles on the requirements for the explainability and acceptance of AI. The results include a comprehensive review of n = 48 articles on the information people need to perceive an AI as explainable, the information they need to accept an AI, and the representation and interaction methods that promote trust in an AI. The results indicate two main user groups: developers, who require information about the internal operations of the model, and end users, who require information about the AI's results or behavior. Users' information needs vary in specificity, complexity, and urgency, and systems must account for context, domain knowledge, and the user's cognitive resources. The acceptance of AI systems depends on information about the system's functions and performance, on privacy and ethical considerations, and on goal-supporting information tailored to individual preferences, as well as on information that establishes trust in the system. Information about the system's limitations and potential failures can increase acceptance and trust. Interaction methods that promote trust are human-like and include natural language, speech, text, and visual representations such as graphs, charts, and animations. These results have significant implications for the development of future human-centric AI systems and are therefore suitable as input for further application-specific investigations of user needs.
Pages: 355-380
Page count: 26