Explainable AI: The Effect of Contradictory Decisions and Explanations on Users' Acceptance of AI Systems

Cited by: 13
Authors
Ebermann, Carolin [1 ]
Selisky, Matthias [1 ]
Weibelzahl, Stephan [1 ]
Affiliations
[1] PFH Private University of Applied Sciences, Business Psychology, Göttingen, Germany
Keywords
Artificial intelligence; Cognitive dissonance; Information systems; Empirical assessment; Attitude change; Expert systems; Algorithm; Design; Model; Arousal
DOI
10.1080/10447318.2022.2126812
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Subject Classification Code
0812
Abstract
Providing explanations of an artificial intelligence (AI) system's decisions has been suggested as a means to increase users' acceptance during the decision-making process. However, little research has examined the psychological mechanism by which these explanations trigger a positive or negative reaction in the user. To address this gap, we investigate how user acceptance is affected when an AI system's decisions and the explanations provided for them contradict the user's own. An interdisciplinary research model was derived and validated in an experiment with 78 participants. Findings suggest that in decision situations with cognitive misfit, users experience negative mood significantly more often and evaluate the AI system's support negatively. The article therefore offers guidance on new interdisciplinary approaches to human-AI interaction during the decision-making process and sheds light on how explainable AI can increase users' acceptance of such systems.
Pages: 1807-1826
Page count: 20
Related Papers
50 items in total
  • [21] The role of domain expertise in trusting and following explainable AI decision support systems
    Bayer, Sarah
    Gimpel, Henner
    Markgraf, Moritz
    JOURNAL OF DECISION SYSTEMS, 2022, 32(1): 110-138
  • [22] Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems
    Bucinca, Zana
    Lin, Phoebe
    Gajos, Krzysztof Z.
    Glassman, Elena L.
    PROCEEDINGS OF THE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES, IUI 2020, 2020: 454-464
  • [23] Toward Explainable AI-Genetic Fuzzy Systems-A Use Case
    Pickering, Lynn
    Cohen, Kelly
    EXPLAINABLE AI AND OTHER APPLICATIONS OF FUZZY TECHNIQUES, NAFIPS 2021, 2022, 258: 343-354
  • [24] When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations
    Kim, Jungkeun
    Giroux, Marilyn
    Lee, Jacob C.
    PSYCHOLOGY & MARKETING, 2021, 38(7): 1140-1155
  • [25] The augmentation effect of artificial intelligence: can AI framing shape customer acceptance of AI-based services?
    Vorobeva, Darina
    Pinto, Diego Costa
    Antonio, Nuno
    Mattila, Anna S.
    CURRENT ISSUES IN TOURISM, 2024, 27(10): 1551-1571
  • [26] Letting AI make decisions for me: an empirical examination of hotel guests' acceptance of technology agency
    Morosan, Cristian
    Dursun-Cengizci, Aslihan
    INTERNATIONAL JOURNAL OF CONTEMPORARY HOSPITALITY MANAGEMENT, 2024, 36(3): 946-974
  • [27] Measuring the effect of mental workload and explanations on appropriate AI reliance using EEG
    Zhang, Zelun Tony
    Argin, Seniha Ketenci
    Bilen, Mustafa Baha
    Urgun, Dogan
    Deniz, Sencer Melih
    Liu, Yuanting
    Hassib, Mariam
    BEHAVIOUR & INFORMATION TECHNOLOGY, 2024
  • [28] Forecasting Ghanaian Medical Library Users' Artificial Intelligence (AI) Technology's Acceptance and Use
    Gyesi, Kwesi
    Amponsah, Vivian
    Ankamah, Samuel
    BIBLIOS-REVISTA DE BIBLIOTECOLOGIA Y CIENCIAS DE LA INFORMACION, 2025, (88)
  • [29] Critical Thinking About Explainable AI (XAI) for Rule-Based Fuzzy Systems
    Mendel, Jerry M.
    Bonissone, Piero P.
    IEEE TRANSACTIONS ON FUZZY SYSTEMS, 2021, 29(12): 3579-3593
  • [30] XAI-IoT: An Explainable AI Framework for Enhancing Anomaly Detection in IoT Systems
    Namrita Gummadi, Anna
    Napier, Jerry C.
    Abdallah, Mustafa
    IEEE ACCESS, 2024, 12: 71024-71054