Transparency and trust in artificial intelligence systems

Cited by: 174
Authors
Schmidt, Philipp [1 ]
Biessmann, Felix [1 ,2 ,3 ]
Teubner, Timm [3 ,4 ]
Affiliations
[1] Amazon Res, Berlin, Germany
[2] Beuth Univ Appl Sci, Informat & Medien, Berlin, Germany
[3] Einstein Ctr Digital Future ECDF, Berlin, Germany
[4] TU Berlin, Inst Technol & Management, Berlin, Germany
Keywords
Artificial intelligence; trust; experiment; machine learning; XAI; transparency; user acceptance; technology; algorithms; model
DOI
10.1080/12460125.2020.1819094
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research]
Subject classification codes
070105; 12; 1201; 1202; 120202
Abstract
Assistive technology featuring artificial intelligence (AI) to support human decision-making has become ubiquitous. Assistive AI achieves accuracy comparable to, or even surpassing, that of human experts. However, the adoption of assistive AI systems is often limited by humans' lack of trust in an AI's predictions. This is why the AI research community has been focusing on rendering AI decisions more transparent by providing explanations of an AI's decisions. To what extent these explanations really help to foster trust in an AI system remains an open question. In this paper, we report the results of a behavioural experiment in which subjects were able to draw on the support of an ML-based decision support tool for text classification. We experimentally varied the information subjects received and show that transparency can actually have a negative impact on trust. We discuss implications for decision makers employing assistive AI technology.
Pages: 260-278
Page count: 19