Trust and reliance on AI - An experimental study on the extent and costs of overreliance on AI

Cited by: 15
Authors
Klingbeil, Artur [1 ]
Gruetzner, Cassandra [1 ]
Schreck, Philipp [1 ]
Affiliations
[1] Martin Luther Univ Halle Wittenberg, Wittenberg, Germany
Keywords
Human-computer interaction; Behavioral experiment; Reliance behavior; Trust attitude; Overreliance; Algorithm appreciation; Algorithm aversion; Automation; People
DOI
10.1016/j.chb.2024.108352
Chinese Library Classification
B84 [Psychology]
Subject Classification Codes
04; 0402
Abstract
Decision-making is changing rapidly with the introduction of artificial intelligence (AI), as AI recommender systems can help mitigate human flaws and increase decision accuracy and efficiency. However, AI can also make errors or suffer from algorithmic bias. Blind trust in such technologies therefore carries risks, as users may follow detrimental advice with undesired consequences. Building on research on algorithm appreciation and trust in AI, the current study investigates whether users who receive AI advice in an uncertain situation overrely on that advice, to their own detriment and that of other parties. In a domain-independent, incentivized, and interactive behavioral experiment, we find that the mere knowledge that advice is generated by an AI causes people to overrely on it, that is, to follow AI advice even when it contradicts available contextual information as well as their own assessment. This overreliance frequently leads not only to inefficient outcomes for the advisee but also to undesired effects on third parties. The results call into question how AI is used in AI-assisted decision-making, emphasizing the importance of AI literacy and effective trust calibration for the productive deployment of such systems.
Pages: 10