Improving Trust in AI with Mitigating Confirmation Bias: Effects of Explanation Type and Debiasing Strategy for Decision-Making with Explainable AI

Cited by: 14
Authors
Ha, Taehyun [1 ]
Kim, Sangyeon [2 ]
Affiliations
[1] Sejong Univ, Dept Data Sci, Seoul, South Korea
[2] Korea Univ, Inst Engn Res, Seoul, South Korea
Keywords
Artificial intelligence; explanation; trust; satisfaction; cognitive bias
DOI
10.1080/10447318.2023.2285640
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
With advancements in artificial intelligence (AI), explainable AI (XAI) has emerged as a promising tool for enhancing the explainability of complex machine learning models. However, the explanations generated by XAI may induce cognitive biases in human users. To address this problem, this study investigates how to mitigate users' cognitive biases based on their individual characteristics. In the literature review, we identified two factors that can help remedy such biases: 1) debiasing strategies, which have been reported to reduce biases in users' decision-making through additional information or changes in how information is delivered, and 2) explanation modality types. To examine the effects of these factors, we conducted an experiment with a 4 (debiasing strategy) × 3 (explanation type) between-subjects design. Participants were exposed to an explainable interface that presented an AI's outcomes together with explanatory information, and their behavioral and attitudinal responses were collected. Specifically, we statistically examined the effects of textual and visual explanations on users' trust in and confirmation bias toward AI systems, considering the moderating effects of debiasing methods and watching time. The results demonstrated that textual explanations lead to higher trust in XAI systems than visual explanations. Moreover, textual explanations were particularly beneficial for quick decision-makers in evaluating the outputs of AI systems. The results also indicated that confirmation bias can be effectively mitigated by providing users with a priori information. These findings have theoretical and practical implications for designing AI-based decision support systems that generate more trustworthy and equitable explanations.
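The design described in the abstract lends itself to a factorial analysis with a continuous moderator. The following is a minimal sketch, not the authors' analysis code, of how a 4 (debiasing strategy) × 3 (explanation type) between-subjects model with watching time as a moderator could be specified in Python with statsmodels; the data file and column names (responses.csv, trust, explanation, debiasing, watch_time) are hypothetical.

```python
# Minimal sketch of a 4 x 3 between-subjects analysis with a
# continuous moderator. All file and column names are hypothetical,
# not taken from the paper.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # hypothetical: one row per participant

# Trust modeled by explanation type, debiasing strategy, their
# interaction, and watching time as a moderator of explanation type.
model = smf.ols(
    "trust ~ C(explanation) * C(debiasing) + C(explanation) * watch_time",
    data=df,
).fit()

# Type-II ANOVA table for the factorial and moderation effects.
print(sm.stats.anova_lm(model, typ=2))
print(model.summary())
```

In practice one would also check model assumptions (e.g., homogeneity of variance) and could swap the trust score for a confirmation-bias measure as the outcome variable.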
Pages: 8562-8573
Page count: 12