When to (or not to) trust intelligent machines: Insights from an evolutionary game theory analysis of trust in repeated games

Cited by: 52
Authors
Han, The Anh [1 ]
Perret, Cedric [1 ]
Powers, Simon T. [2 ]
Affiliations
[1] Teesside Univ, Middlesbrough, Cleveland, England
[2] Edinburgh Napier Univ, Edinburgh, Midlothian, Scotland
Source
COGNITIVE SYSTEMS RESEARCH | 2021, Vol. 68
Keywords
Trust; Evolutionary game theory; Intelligent agents; Cooperation; Prisoner's dilemma; Repeated games; REPEATED PRISONERS-DILEMMA; TIT-FOR-TAT; WIN-STAY; DYNAMICS; COOPERATION; INFORMATION; STRATEGY;
DOI
10.1016/j.cogsys.2021.02.003
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The actions of intelligent agents, such as chatbots, recommender systems, and virtual assistants, are typically not fully transparent to the user. Consequently, users take the risk that such agents act in ways opposed to the users' preferences or goals. It is often argued that people use trust as a cognitive shortcut to reduce the complexity of such interactions. Here we formalise this by using the methods of evolutionary game theory to study the viability of trust-based strategies in repeated games. These are reciprocal strategies that cooperate as long as the other player is observed to be cooperating. Unlike classic reciprocal strategies, once mutual cooperation has been observed for a threshold number of rounds, they stop checking their co-player's behaviour every round and instead only check it with some probability. By doing so, they reduce the opportunity cost of verifying whether the action of their co-player was actually cooperative. We demonstrate that these trust-based strategies can outcompete strategies that are always conditional, such as Tit-for-Tat, when the opportunity cost is non-negligible. We argue that this cost is likely to be greater when the interaction is between people and intelligent agents, because of the reduced transparency of the agent. Consequently, we expect people to use trust-based strategies more frequently in interactions with intelligent agents. Our results provide new, important insights into the design of mechanisms for facilitating interactions between humans and intelligent agents, where trust is an essential factor. (C) 2021 Elsevier B.V. All rights reserved.
Pages: 111-124
Page count: 14
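
To make the mechanism described in the abstract concrete, the following is a minimal Python sketch of one such trust-based strategy in a repeated Prisoner's Dilemma. The payoff values, the grim-style permanent defection after an observed defection, and the names THRESHOLD, CHECK_PROB, and CHECK_COST are all illustrative assumptions for this sketch, not the exact model analysed in the paper.

import random

# Illustrative Prisoner's Dilemma payoffs (T > R > P > S); not the paper's values.
R, S, T, P = 3, 0, 5, 1
CHECK_COST = 0.5   # assumed opportunity cost of verifying the co-player's action
THRESHOLD = 3      # rounds of observed mutual cooperation before trust is established
CHECK_PROB = 0.2   # probability of still verifying once trust is established

def payoff(mine, theirs):
    """This player's payoff for one round."""
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(mine, theirs)]

def trust_based(opponent_actions):
    """Play a trust-based reciprocal strategy against a fixed action sequence.

    Cooperates while the co-player is believed to be cooperating. Verifies the
    co-player's action every round until THRESHOLD consecutive cooperations have
    been observed; after that, verifies only with probability CHECK_PROB, saving
    CHECK_COST on every unverified round. On an observed defection it switches
    to permanent defection (one possible punishment, assumed here for brevity).
    """
    total, coop_streak, trusting = 0.0, 0, True
    for theirs in opponent_actions:
        mine = "C" if trusting else "D"
        total += payoff(mine, theirs)
        verifying = coop_streak < THRESHOLD or random.random() < CHECK_PROB
        if verifying:
            total -= CHECK_COST
            if theirs == "D":
                trusting = False
            elif mine == "C":
                coop_streak += 1
    return total

random.seed(1)
print("vs. unconditional cooperator:", trust_based(["C"] * 50))
print("vs. co-player defecting late:", trust_based(["C"] * 25 + ["D"] * 25))

Against an unconditional cooperator, the trust-based player avoids most of the verification cost that an always-conditional player such as Tit-for-Tat would keep paying every round; against a co-player that starts defecting after round 25, it risks a few exploited rounds before a probabilistic check detects the defection. That cost-versus-exploitability trade-off is the one the paper's evolutionary analysis examines.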