When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI Systems

Cited by: 1
Authors
Chen, Cheng [1 ]
Liao, Mengqi [2 ]
Sundar, S. Shyam [3 ]
Affiliations
[1] Elon Univ, Sch Commun, Commun Design Dept, Elon, NC 27244 USA
[2] Univ Georgia, Grady Coll Journalism & Mass Commun, Dept Advertising & Publ Relat, Athens, GA USA
[3] Penn State Univ, Bellisario Coll Commun, Media Effects Res Lab, University Pk, PA USA
Source
Proceedings of the Second International Symposium on Trustworthy Autonomous Systems (TAS 2024), 2024
Keywords
PERCEIVED USEFULNESS; MERE EXPOSURE; ROBOT; EXPERIENCE; AUTOMATION; EASE; BOX
DOI: 10.1145/3686038.3686066
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline Classification Code: 0812
Abstract
Explanations are believed to aid understanding of AI models, but do they affect users' perceptions and trust in AI, especially in the presence of algorithmic bias? If so, when should explanations be provided to optimally balance explainability and usability? To answer these questions, we conducted a user study (N = 303) exploring how explanation timing influences users' perception of trust calibration, understanding of the AI system, and user experience and user interface satisfaction under both biased and unbiased AI performance conditions. We found that pre-explanations seem most valuable when the AI shows bias in its performance, whereas post-explanations appear more favorable when the system is bias-free. Showing both pre- and post-explanations tends to result in higher perceived trust calibration regardless of bias, despite concerns about content redundancy. Implications for designing socially responsible, explainable, and trustworthy AI interfaces are discussed.
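To make the reported design concrete, here is a minimal analysis sketch in Python, assuming a 3 (explanation timing: pre, post, both) x 2 (AI performance: biased, unbiased) between-subjects layout with roughly 50 participants per cell (N = 303 in the study) and a single perceived-trust-calibration rating per participant. The condition labels, the simulated data, and the two-way ANOVA are illustrative assumptions, not the authors' materials or analysis.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical 3 x 2 between-subjects design inferred from the abstract;
# cell sizes and the outcome scale are assumptions for illustration only.
rng = np.random.default_rng(0)
rows = []
for timing in ["pre", "post", "both"]:
    for bias in ["biased", "unbiased"]:
        for _ in range(50):  # ~50 participants per cell
            score = 4.0 + rng.normal(0.0, 1.0)  # baseline rating on an arbitrary scale
            if timing == "both":
                # Abstract: showing both pre- and post-explanations raises
                # perceived trust calibration regardless of bias.
                score += 0.5
            rows.append({"timing": timing, "bias": bias, "trust_calibration": score})
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of timing and bias, plus their interaction.
model = ols("trust_calibration ~ C(timing) * C(bias)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

By construction, only the timing main effect should come out reliable in this simulation, mirroring the abstract's finding for the both-explanations condition.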
Pages: 17
Related Papers (50 in total)
  • [1] Pieters, Wolter. Explanation and trust: what to tell the user in security and AI? Ethics and Information Technology, 2011, 13(1): 53-64.
  • [2] Ribes, Delphine; Henchoz, Nicolas; Portier, Helene; Defayes, Lara; Phan, Thanh-Trung; Gatica-Perez, Daniel; Sonderegger, Andreas. Trust Indicators and Explainable AI: A Study on User Perceptions. Human-Computer Interaction, INTERACT 2021, Pt II, 2021, 12933: 662-671.
  • [3] Angerschmid, Alessa; Theuermann, Kevin; Holzinger, Andreas; Chen, Fang; Zhou, Jianlong. Effects of Fairness and Explanation on Trust in Ethical AI. Machine Learning and Knowledge Extraction, CD-MAKE 2022, 2022, 13480: 51-67.
  • [4] Alam, Lamia; Mueller, Shane. Examining the effect of explanation on satisfaction and trust in AI diagnostic systems. BMC Medical Informatics and Decision Making, 2021, 21(1).
  • [5] Meurisch, Christian; Mihale-Wilson, Cristina A.; Hawlitschek, Adrian; Giger, Florian; Mueller, Florian; Hinz, Oliver; Muehlhaeuser, Max. Exploring User Expectations of Proactive AI Systems. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), 2020, 4(4).
  • [6] Molina, Maria D.; Sundar, S. Shyam. When AI moderates online content: effects of human collaboration and interactive transparency on user trust. Journal of Computer-Mediated Communication, 2022, 27(4).
  • [7] Jiang, Jinglu; Kahai, Surinder; Yang, Ming. Who needs explanation and when? Juggling explainable AI and user epistemic uncertainty. International Journal of Human-Computer Studies, 2022, 165.
  • [8] Ogawa, Ryuichi; Shima, Shigeyoshi; Takemura, Toshihiko; Fukuzumi, Shin-ichi. A Study on Trust Building in AI Systems Through User Commitment. Human Interface and the Management of Information, HIMI 2023, Pt I, 2023, 14015: 557-567.