Predicting Human Perceptions of Robot Performance during Navigation Tasks

Cited by: 0
Authors
Zhang, Qiping [1 ]
Tsoi, Nathan [1 ]
Nagib, Mofeed [1 ]
Choi, Booyeon [1 ]
Tan, Jie [2 ]
Chiang, Hao-tien lewis [2 ]
Vazquez, Marynel [1 ]
Affiliations
[1] Yale Univ, New Haven, CT 06520 USA
[2] Google Inc, Google DeepMind, Mountain View, CA USA
Funding
U.S. National Science Foundation;
Keywords
implicit human feedback; human-robot interaction; social robot navigation; virtual reality; SIMULATORS; BEHAVIOR; SYSTEM;
DOI
10.1145/3719020
Chinese Library Classification
TP24 [Robotics];
Discipline codes
080202; 1405;
Abstract
Understanding human perceptions of robot performance is crucial for designing socially intelligent robots that can adapt to human expectations. Current approaches often rely on surveys, which can disrupt ongoing human-robot interactions. As an alternative, we explore predicting people's perceptions of robot performance using non-verbal behavioral cues and machine learning techniques. We contribute the SEAN TOGETHER Dataset, consisting of observations of interactions between a person and a mobile robot in Virtual Reality, together with perceptions of robot performance provided by users on a 5-point scale. We then analyze how well humans and supervised learning techniques can predict perceived robot performance from different observation types (such as facial expression and spatial behavior features). Our results suggest that facial expressions alone provide useful information, but in the navigation scenarios we considered, reasoning about spatial features in context is critical for the prediction task. Also, supervised learning techniques outperformed human predictions in most cases. Further, when predicting robot performance as a binary classification task on unseen users' data, the F1-score of machine learning models more than doubled that of predictions on a 5-point scale, suggesting good generalization, particularly in identifying the direction of performance rather than exact ratings. Based on these findings, we conducted a real-world demonstration in which a mobile robot uses a machine learning model to predict how a human following it perceives it. Finally, we discuss the implications of our results for deploying these supervised learning models in real-world navigation. Our work paves the way to automatically enhancing robot behavior based on observations of users and inferences about their perceptions of a robot.
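The reported jump in F1-score when moving from exact 5-point prediction to binary classification can be illustrated with a small toy sketch. This is not the authors' code: the ratings, the macro-averaged F1 implementation, and the "rating ≥ 3 counts as good performance" binarization threshold are all illustrative assumptions.

```python
# Toy illustration (not from the paper): collapsing 5-point performance
# ratings into a binary label can raise F1 dramatically, because "off by
# one" predictions that miss the exact rating still get the direction right.

def f1_macro(y_true, y_pred, labels):
    """Macro-averaged F1 computed per class, then averaged."""
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Hypothetical ratings on a 1-5 scale; predictions are mostly off by one.
true_5 = [1, 2, 2, 4, 5, 5, 3, 4]
pred_5 = [2, 1, 2, 5, 4, 5, 4, 4]

# Binarize: rating >= 3 treated as "good performance" (assumed threshold).
true_bin = [int(r >= 3) for r in true_5]
pred_bin = [int(r >= 3) for r in pred_5]

print(f1_macro(true_5, pred_5, labels=[1, 2, 3, 4, 5]))  # low: exact 5-class
print(f1_macro(true_bin, pred_bin, labels=[0, 1]))       # high: binary
```

On this toy data the 5-class macro F1 is 0.28 while the binarized F1 is 1.0, mirroring the paper's observation that models generalize far better at identifying performance directionality than exact ratings.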
Pages: 583