Examining the effect of explanation on satisfaction and trust in AI diagnostic systems

Cited by: 28
Authors
Alam, Lamia [1]
Mueller, Shane [1]
Affiliations
[1] Michigan Technol Univ, Houghton, MI 49931 USA
Keywords
EXPERT SYSTEM;
DOI
10.1186/s12911-021-01542-6
Chinese Library Classification (CLC)
R-058 [];
Subject classification
Abstract
Background: Artificial intelligence has the potential to revolutionize healthcare, and it is increasingly being deployed to support and assist medical diagnosis. One potential application of AI is as the first point of contact for patients, making initial diagnoses before a patient is sent to a specialist and allowing healthcare professionals to focus on more challenging and critical aspects of treatment. But for AI systems to succeed in this role, it will not be enough for them merely to provide accurate diagnoses and predictions; they will also need to explain (to both physicians and patients) why those diagnoses were made. Without such explanations, accurate and correct diagnoses and treatments might be ignored or rejected.
Method: It is important to evaluate the effectiveness of these explanations and to understand the relative effectiveness of different kinds of explanations. In this paper, we examine this problem across two simulation experiments. In the first experiment, we tested a re-diagnosis scenario to understand the effect of local and global explanations. In the second simulation experiment, we implemented different forms of explanation in a similar diagnosis scenario.
Results: Results show that explanation improved satisfaction measures during the critical re-diagnosis period but had little effect before re-diagnosis (when initial treatment was taking place) or after it (when an alternate diagnosis resolved the case successfully). Furthermore, initial "global" explanations about the process had no impact on immediate satisfaction but improved later judgments of understanding of the AI. Results of the second experiment show that visual and example-based explanations integrated with rationales had a significantly greater impact on patient satisfaction and trust than no explanations or text-based rationales alone. As in Experiment 1, these explanations had their effect primarily on immediate measures of satisfaction during the re-diagnosis crisis, with little advantage before re-diagnosis or once the diagnosis was successfully resolved.
Conclusion: These two studies help us draw several conclusions about how patient-facing explanatory diagnostic systems may succeed or fail. Based on these studies and a review of the literature, we provide design recommendations for the explanations offered by AI systems in the healthcare domain.
Pages: 15