Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation

Cited by: 92
Authors
Ishowo-Oloko, Fatimah [1]
Bonnefon, Jean-Francois [2,3]
Soroye, Zakariyah [1]
Crandall, Jacob [4]
Rahwan, Iyad [3,5]
Rahwan, Talal [6]
Affiliations
[1] Khalifa Univ, Dept Comp Sci, Abu Dhabi, U Arab Emirates
[2] Univ Toulouse Capitole, CNRS, Toulouse Sch Econ TSM R, Toulouse, France
[3] Max Planck Inst Human Dev, Ctr Humans & Machines, Berlin, Germany
[4] Brigham Young Univ, Dept Comp Sci, Provo, UT 84602 USA
[5] MIT, Media Lab, Cambridge, MA 02139 USA
[6] New York Univ Abu Dhabi, Dept Comp Sci, Abu Dhabi, U Arab Emirates
Keywords
EVOLUTION; PEOPLE
DOI
10.1038/s42256-019-0113-5
Chinese Library Classification (CLC): TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Recent advances in artificial intelligence and deep learning have made it possible for bots to pass as humans, as is the case with the recent Google Duplex, an automated voice assistant capable of generating realistic speech that can fool humans into thinking they are talking to another human. Such technologies have drawn sharp criticism due to their ethical implications, and have fuelled a push towards transparency in human-machine interactions. Despite the legitimacy of these concerns, it remains unclear whether bots would compromise their efficiency by disclosing their true nature. Here, we conduct a behavioural experiment in which participants play a repeated prisoner's dilemma game with a human or a bot, after being given either true or false information about the nature of their associate. We find that bots do better than humans at inducing cooperation, but that disclosing their true nature negates this superior efficiency. Human participants do not recover from their prior bias against bots, despite experiencing the cooperative attitudes exhibited by bots over time. These results highlight the need to set standards for the efficiency cost we are willing to pay in order for machines to be transparent about their non-human nature.

Algorithms and bots are capable of performing some behaviours at human or super-human levels. Humans, however, tend to trust algorithms less than they trust other humans. The authors find that bots do better than humans at inducing cooperation in certain human-machine interactions, but only if the bots do not disclose their true nature as artificial.
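The repeated prisoner's dilemma underlying the experiment can be sketched as follows. The payoff values (the conventional T > R > P > S ordering: 5, 3, 1, 0) and the two strategies shown are illustrative assumptions, not the study's exact protocol:

```python
# Minimal sketch of a repeated prisoner's dilemma.
# Payoff values follow the standard T > R > P > S convention
# (an assumption; the paper's exact payoffs may differ).
PAYOFFS = {  # (my_move, partner_move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (reward, R)
    ("C", "D"): 0,  # I cooperate, partner defects (sucker, S)
    ("D", "C"): 5,  # I defect, partner cooperates (temptation, T)
    ("D", "D"): 1,  # mutual defection (punishment, P)
}

def tit_for_tat(history):
    """Cooperate first, then copy the partner's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Defect on every round regardless of history."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play a repeated game; return cumulative scores (a, b)."""
    history_a, history_b = [], []  # entries: (own_move, partner_move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # sustained mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # defection erodes both scores: (9, 14)
```

The structure illustrates the paper's central tension: over repeated rounds, a partner who reliably reciprocates cooperation earns more for both sides than one who defects, so an agent's ability to induce cooperation directly determines its efficiency.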
Pages: 517+
Page count: 10
References: 46