Explainable AI: The Effect of Contradictory Decisions and Explanations on Users' Acceptance of AI Systems

Cited by: 13
Authors
Ebermann, Carolin [1 ]
Selisky, Matthias [1 ]
Weibelzahl, Stephan [1 ]
Affiliations
[1] PFH Private Univ Appl Sci, Business Psychol, Gottingen, Germany
Keywords
Artificial intelligence; Cognitive dissonance; Information systems; Empirical assessment; Attitude change; Expert systems; Algorithm; Design; Model; Arousal
DOI
10.1080/10447318.2022.2126812
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Providing explanations for an artificial intelligence (AI) system's decisions has been suggested as a means to increase users' acceptance during the decision-making process. However, little research has examined the psychological mechanism by which these explanations trigger a positive or negative reaction in the user. To address this gap, we investigate the effect on user acceptance when an AI system's decisions and the explanations provided for them contradict the user's own. An interdisciplinary research model was derived and validated in an experiment with 78 participants. Findings suggest that in decision situations with cognitive misfit, users experience negative mood significantly more often and evaluate the AI system's support negatively. The article therefore provides guidance on new interdisciplinary approaches to human-AI interaction during the decision-making process and sheds light on how explainable AI can increase users' acceptance of such systems.
Pages: 1807-1826
Number of pages: 20
Related Papers
50 records in total
  • [31] Gaspar, Diogo; Silva, Paulo; Silva, Catarina. Explainable AI for Intrusion Detection Systems: LIME and SHAP Applicability on Multi-Layer Perceptron. IEEE ACCESS, 2024, 12: 30164-30175.
  • [32] Alabed, Amani; Javornik, Ana; Gregory-Smith, Diana. AI anthropomorphism and its effect on users' self-congruence and self-AI integration: A theoretical framework and research agenda. TECHNOLOGICAL FORECASTING AND SOCIAL CHANGE, 2022, 182.
  • [33] Riveiro, Maria; Thill, Serge. "That's (not) the output I expected!" On the role of end user expectations in creating explanations of AI systems. ARTIFICIAL INTELLIGENCE, 2021, 298.
  • [34] Ashiku, Lirim; Threlkeld, Richard; Canfield, Casey; Dagli, Cihan. Identifying AI Opportunities in Donor Kidney Acceptance: Incremental Hierarchical Systems Engineering Approach. SYSCON 2022: THE 16TH ANNUAL IEEE INTERNATIONAL SYSTEMS CONFERENCE (SYSCON), 2022.
  • [35] Kinney, Marjorie; Anastasiadou, Maria; Naranjo-Zolotov, Mijail; Santos, Vitor. Expectation management in AI: A framework for understanding stakeholder trust and acceptance of artificial intelligence systems. HELIYON, 2024, 10 (07).
  • [36] Hu, Xiaoyong; Li, Mufeng; Wang, Dixin; Yu, Feng. Reactions to immoral AI decisions: The moral deficit effect and its underlying mechanism. CHINESE SCIENCE BULLETIN-CHINESE, 2024, 69 (11): 1406-1416.
  • [37] Nazat, Sazid; Arreche, Osvaldo; Abdallah, Mustafa. On Evaluating Black-Box Explainable AI Methods for Enhancing Anomaly Detection in Autonomous Driving Systems. SENSORS, 2024, 24 (11).
  • [38] Dai, Tinglong; Tayur, Sridhar. Designing AI-augmented healthcare delivery systems for physician buy-in and patient acceptance. PRODUCTION AND OPERATIONS MANAGEMENT, 2022, 31 (12): 4443-4451.
  • [39] Auletta, Fabrizia; Kallen, Rachel W.; di Bernardo, Mario; Richardson, Michael J. Predicting and understanding human action decisions during skillful joint-action using supervised machine learning and explainable-AI. SCIENTIFIC REPORTS, 2023, 13 (01).
  • [40] Liu, Xianggang; Shi, Zhanzhong. Served or Exploited: The Impact of Data Capture Strategy on Users' Intention to Use Selected AI Systems. JOURNAL OF CONSUMER BEHAVIOUR, 2025, 24 (02): 1003-1016.