It's Complicated: The Relationship between User Trust, Model Accuracy and Explanations in AI

Cited by: 58
Authors
Papenmeier, Andrea [1 ]
Kern, Dagmar [1 ]
Englebienne, Gwenn [2 ]
Seifert, Christin [2 ,3 ]
Affiliations
[1] GESIS Leibniz Inst Social Sci, Unter Sachsenhausen 6-8, D-50667 Cologne, Germany
[2] Univ Twente, Drienerlolaan 5, NL-7522 NB Enschede, Netherlands
[3] Univ Duisburg Essen, Girardetstr 2, D-45131 Essen, Germany
Keywords
Explainable AI; machine learning; minimum explanations; user trust; explanation fidelity; automation
DOI
10.1145/3495013
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Subject classification code
0812
Abstract
Automated decision-making systems are becoming increasingly powerful as model complexity grows. While strong in prediction accuracy, deep learning models are black boxes by nature, preventing users from making informed judgments about the correctness and fairness of such automated systems. Explanations have been proposed as a general remedy to the black-box problem. However, it remains unclear whether the effects of explanations on user trust generalise across varying accuracy levels. In an online user study with 959 participants, we examined the practical consequences of adding explanations for user trust: we evaluated trust for three explanation types on three classifiers of varying accuracy. We find that the influence of our explanations on trust differs depending on the classifier's accuracy; thus, the interplay between trust and explanations is more complex than previously reported. Our findings also reveal discrepancies between self-reported and behavioural trust, showing that the choice of trust measure affects the results.
Pages: 33