Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation

Cited by: 92
Authors
Ishowo-Oloko, Fatimah [1 ]
Bonnefon, Jean-Francois [2 ,3 ]
Soroye, Zakariyah [1 ]
Crandall, Jacob [4 ]
Rahwan, Iyad [3 ,5 ]
Rahwan, Talal [6 ]
Affiliations
[1] Khalifa Univ, Dept Comp Sci, Abu Dhabi, U Arab Emirates
[2] Univ Toulouse Capitole, CNRS, Toulouse Sch Econ TSM R, Toulouse, France
[3] Max Planck Inst Human Dev, Ctr Humans & Machines, Berlin, Germany
[4] Brigham Young Univ, Dept Comp Sci, Provo, UT 84602 USA
[5] MIT, Media Lab, Cambridge, MA 02139 USA
[6] New York Univ Abu Dhabi, Dept Comp Sci, Abu Dhabi, U Arab Emirates
Keywords
EVOLUTION; PEOPLE
DOI
10.1038/s42256-019-0113-5
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Recent advances in artificial intelligence and deep learning have made it possible for bots to pass as humans, as is the case with the recent Google Duplex, an automated voice assistant capable of generating realistic speech that can fool humans into thinking they are talking to another human. Such technologies have drawn sharp criticism due to their ethical implications, and have fuelled a push towards transparency in human-machine interactions. Despite the legitimacy of these concerns, it remains unclear whether bots would compromise their efficiency by disclosing their true nature. Here, we conduct a behavioural experiment in which participants play a repeated prisoner's dilemma game with a human or a bot, after being given either true or false information about the nature of their associate. We find that bots do better than humans at inducing cooperation, but that disclosing their true nature negates this superior efficiency. Human participants do not overcome their prior bias against bots, despite repeatedly experiencing the bots' cooperative behaviour over time. These results highlight the need to set standards for the efficiency cost we are willing to pay in order for machines to be transparent about their non-human nature.

Algorithms and bots are capable of performing some behaviours at human or super-human levels. Humans, however, tend to trust algorithms less than they trust other humans. The authors find that bots do better than humans at inducing cooperation in certain human-machine interactions, but only if the bots do not disclose their true nature as artificial.
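To make the experimental design concrete, the sketch below simulates the 2x2 manipulation described in the abstract: partner disclosure (told "human" vs. told "bot") crossed over repeated prisoner's dilemma play. This is a minimal illustration, not the authors' protocol: the payoff values, the tit-for-tat bot strategy, the cooperation probabilities in the toy participant model, and the round counts are all assumptions made here for demonstration.

```python
# Illustrative simulation of a repeated prisoner's dilemma under two
# disclosure conditions. All numeric values and strategies are
# assumptions for illustration, not taken from the paper.
import random

# Standard prisoner's dilemma payoffs: (my payoff, partner's payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(history):
    """Cooperate on the first round, then mirror the partner's last move.
    `history` is a list of (own_move, partner_move) tuples."""
    return "C" if not history else history[-1][1]

def toy_participant(trusts_partner):
    """Toy model of a human participant: less likely to cooperate when
    they believe the partner is a bot (the assumed prior bias)."""
    base = 0.8 if trusts_partner else 0.5  # assumed probabilities
    return "C" if random.random() < base else "D"

def play_repeated_pd(rounds=50, told_human=True):
    """One participant vs. a tit-for-tat bot; returns the participant's
    cooperation rate over the match."""
    history = []  # (participant_move, bot_move) per round
    cooperations = 0
    for _ in range(rounds):
        p_move = toy_participant(trusts_partner=told_human)
        # The bot sees the same history from its own perspective.
        b_move = tit_for_tat([(b, p) for p, b in history])
        history.append((p_move, b_move))
        cooperations += p_move == "C"
    return cooperations / rounds

if __name__ == "__main__":
    random.seed(0)
    for label, told_human in [("told 'human'", True), ("told 'bot'", False)]:
        rate = sum(play_repeated_pd(told_human=told_human)
                   for _ in range(200)) / 200
        print(f"{label}: cooperation rate = {rate:.2f}")
```

Under these assumed parameters the simulation reproduces the qualitative pattern reported in the abstract (lower cooperation when participants are told the partner is a bot), but the actual effect sizes come from the paper's behavioural data, not from any model like this.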
Pages: 517+
Page count: 10