When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI Systems

Cited by: 1
Authors
Chen, Cheng [1 ]
Liao, Mengqi [2 ]
Sundar, S. Shyam [3 ]
Affiliations
[1] Elon Univ, Sch Commun, Commun Design Dept, Elon, NC 27244 USA
[2] Univ Georgia, Grady Coll Journalism & Mass Commun, Dept Advertising & Publ Relat, Athens, GA USA
[3] Penn State Univ, Bellisario Coll Commun, Media Effects Res Lab, University Pk, PA USA
Source
PROCEEDINGS OF THE SECOND INTERNATIONAL SYMPOSIUM ON TRUSTWORTHY AUTONOMOUS SYSTEMS, TAS 2024 | 2024
Keywords
PERCEIVED USEFULNESS; MERE EXPOSURE; ROBOT; EXPERIENCE; AUTOMATION; EASE; BOX
DOI
10.1145/3686038.3686066
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Explanations are believed to aid understanding of AI models, but do they affect users' perceptions of and trust in AI, especially in the presence of algorithmic bias? If so, when should explanations be provided to optimally balance explainability and usability? To answer these questions, we conducted a user study (N = 303) exploring how explanation timing influences users' perceived trust calibration, understanding of the AI system, and satisfaction with the user experience and user interface, under both biased and unbiased AI performance conditions. We found that pre-explanations seem most valuable when the AI shows bias in its performance, whereas post-explanations appear more favorable when the system is bias-free. Showing both pre- and post-explanations tends to result in higher perceived trust calibration regardless of bias, despite concerns about content redundancy. Implications for designing socially responsible, explainable, and trustworthy AI interfaces are discussed.
Pages: 17