Explainable AI: The Effect of Contradictory Decisions and Explanations on Users' Acceptance of AI Systems

Cited by: 13
Authors
Ebermann, Carolin [1 ]
Selisky, Matthias [1 ]
Weibelzahl, Stephan [1 ]
Affiliations
[1] PFH Private Univ Appl Sci, Business Psychol, Göttingen, Germany
Keywords
Artificial intelligence; Cognitive dissonance; Information systems; Empirical assessment; Attitude change; Expert systems; Algorithm; Design; Model; Arousal
DOI
10.1080/10447318.2022.2126812
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Providing explanations of an artificial intelligence (AI) system has been suggested as a means to increase users' acceptance during the decision-making process. However, little research has examined the psychological mechanism by which these explanations elicit a positive or negative reaction in the user. To address this gap, we investigate the effect on user acceptance when an AI system's decisions and the explanations provided for them contradict those of the user. An interdisciplinary research model was derived and validated in an experiment with 78 participants. Findings suggest that in decision situations with cognitive misfit, users experience negative mood significantly more often and evaluate the AI system's support negatively. The article therefore offers guidance on new interdisciplinary approaches to human-AI interaction during the decision-making process and sheds light on how explainable AI can increase users' acceptance of such systems.
Pages: 1807-1826
Number of pages: 20
Related Papers
50 records in total
  • [1] Putting explainable AI in context: institutional explanations for medical AI
    Theunissen, Mark
    Browning, Jacob
    ETHICS AND INFORMATION TECHNOLOGY, 2022, 24 (02)
  • [2] Build confidence and acceptance of AI-based decision support systems - Explainable and liable AI
    Nicodeme, Claire
    2020 13TH INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTION (HSI), 2020, : 20 - 23
  • [3] Collaborative Explainable AI: A Non-algorithmic Approach to Generating Explanations of AI
    Mamun, Tauseef Ibne
    Hoffman, Robert R.
    Mueller, Shane T.
    HCI INTERNATIONAL 2021 - LATE BREAKING POSTERS, HCII 2021, PT I, 2021, 1498 : 144 - 150
  • [4] A Methodological Framework for Business Decisions with Explainable AI and the Analytic Hierarchical Process
    Marin Diaz, Gabriel
    Medina, Raquel Gomez
    Jimenez, Jose Alberto Aijon
    PROCESSES, 2025, 13 (01)
  • [5] Towards Human-Centered Explainable AI: A Survey of User Studies for Model Explanations
    Rong, Yao
    Leemann, Tobias
    Nguyen, Thai-Trang
    Fiedler, Lisa
    Qian, Peizhu
    Unhelkar, Vaibhav
    Seidel, Tina
    Kasneci, Gjergji
    Kasneci, Enkelejda
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (04) : 2104 - 2122
  • [6] Using explainable AI to unravel classroom dialogue analysis: Effects of explanations on teachers' trust, technology acceptance and cognitive load
    Wang, Deliang
    Bian, Cunling
    Chen, Gaowei
    BRITISH JOURNAL OF EDUCATIONAL TECHNOLOGY, 2024, 55 (06) : 2530 - 2556
  • [7] Effect of AI Explanations on Human Perceptions of Patient-Facing AI-Powered Healthcare Systems
    Zhang, Zhan
    Genc, Yegin
    Wang, Dakuo
    Ahsen, Mehmet Eren
    Fan, Xiangmin
    JOURNAL OF MEDICAL SYSTEMS, 2021, 45 (06)
  • [8] Promising AI Applications in Power Systems: Explainable AI (XAI), Transformers, LLMs
    Lukianykhin, Oleh
    Shendryk, Vira
    Shendryk, Sergii
    Malekian, Reza
    NEW TECHNOLOGIES, DEVELOPMENT AND APPLICATION VII, VOL 2, NT-2024, 2024, 1070 : 66 - 76
  • [9] The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI
    Shin, Donghee
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2021, 146