Action Anticipation: Reading the Intentions of Humans and Robots

Cited by: 50
Authors
Duarte, Nuno Ferreira [1 ,2 ]
Rakovic, Mirko [1 ,2 ]
Tasevski, Jovica [2 ]
Coco, Moreno Ignazio [3 ]
Billard, Aude [4 ]
Santos-Victor, Jose [1 ,2 ]
Affiliations
[1] Univ Lisbon, Inst Super Tecn, Inst Syst & Robot, Vislab, P-1649004 Lisbon, Portugal
[2] Univ Novi Sad, Fac Tech Sci, Novi Sad 21000, Serbia
[3] Univ Edinburgh, Ctr Cognit Ageing & Cognit Epidemiol, Dept Psychol, Edinburgh EH8 9JZ, Midlothian, Scotland
[4] Ecole Polytech Fed Lausanne, Sch Engn, Learning Algorithms & Syst Lab, CH-1015 Lausanne, Switzerland
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2018, Vol. 3, No. 4
Funding
EU Horizon 2020
Keywords
Social human-robot interaction; humanoid robots; sensor fusion; VISION;
DOI
10.1109/LRA.2018.2861569
Chinese Library Classification
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
Humans have the fascinating capacity of processing nonverbal visual cues to understand and anticipate the actions of other humans. This "intention reading" ability is underpinned by shared motor repertoires and action models, which we use to interpret the intentions of others as if they were our own. We investigate how different cues contribute to the legibility of human actions during interpersonal interactions. Our first contribution is a publicly available dataset with recordings of human body motion and eye gaze, acquired in an experimental scenario with an actor interacting with three subjects. From these data, we conducted a human study to analyze the importance of different nonverbal cues for action perception. As our second contribution, we used the motion/gaze recordings to build a computational model describing the interaction between two persons. As a third contribution, we embedded this model in the controller of an iCub humanoid robot and conducted a second human study, in the same scenario with the robot as an actor, to validate the model's "intention reading" capability. Our results show that it is possible to model the (nonverbal) signals exchanged by humans during interaction, and that such a mechanism can be incorporated in robotic systems with the twin goals of being able to "read" human action intentions and of acting in a way that is legible by humans.
Pages: 4132-4139 (8 pages)