When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interactions

Cited by: 0
Authors
Mou, Wenxuan [1 ]
Ruocco, Martina [1 ]
Zanatto, Debora [2 ]
Cangelosi, Angelo [1 ]
Affiliations
[1] Univ Manchester, Dept Comp Sci, Manchester, Lancs, England
[2] Univ Bristol, Dept Comp Sci, Bristol, Avon, England
Source
2020 29TH IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN) | 2020
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
COGNITIVE ARCHITECTURE; ANTHROPOMORPHISM; METAANALYSIS; CREDIBILITY; LIKABILITY;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Trust is a critical issue in human-robot interaction (HRI), as it is central to humans' willingness to accept and use a non-human agent. Theory of Mind (ToM) has been defined as the ability to understand the beliefs and intentions of others that may differ from one's own. Evidence from psychology and HRI suggests that trust and ToM are interconnected and interdependent concepts, as the decision to trust another agent must depend on our own representation of that entity's actions, beliefs, and intentions. However, very few studies take the robot's ToM into consideration when examining trust in HRI. In this paper, we investigated whether exposure to the ToM abilities of a robot could affect humans' trust towards it. To this end, participants played a Price Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. Specifically, the participants were asked whether to accept the robot's price evaluations of common objects. The participants' willingness to change their own price judgement of the objects (i.e., to accept the price the robot suggested) was used as the main measure of trust towards the robot. Our experimental results showed that the robot presented with high-level ToM abilities was trusted more than the robot presented with low-level ToM skills.
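The main dependent measure described in the abstract is the rate at which participants adopt the robot's suggested prices. The following is a minimal, hypothetical Python sketch of how such an acceptance-rate trust score could be computed; the function name, trial data, and structure are illustrative assumptions, not the authors' analysis code.

# Minimal illustrative sketch (assumption, not the authors' analysis code):
# the trust measure is the proportion of trials in which the participant
# adopts the price suggested by the robot.

def trust_score(accepted_robot_price):
    """Return the share of trials where the robot's suggested price was accepted."""
    if not accepted_robot_price:
        raise ValueError("no trials recorded")
    return sum(accepted_robot_price) / len(accepted_robot_price)

# Hypothetical decisions of one participant across eight object-pricing trials.
trials = [True, False, True, True, False, True, True, False]
print(f"trust score: {trust_score(trials):.2f}")  # prints "trust score: 0.62"

A higher score would correspond to greater willingness to replace one's own price judgement with the robot's suggestion, which is how the paper operationalizes trust.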
Pages: 956-962
Page count: 7