Action Alignment from Gaze Cues in Human-Human and Human-Robot Interaction

Cited by: 5
Authors
Duarte, Nuno Ferreira [1 ]
Rakovic, Mirko [1 ,2 ]
Marques, Jorge [1 ]
Santos-Victor, Jose [1 ]
Affiliations
[1] Univ Lisbon, Inst Syst & Robot, Inst Super Tecn, Vislab, Lisbon, Portugal
[2] Univ Novi Sad, Fac Tech Sci, Novi Sad, Serbia
Source
COMPUTER VISION - ECCV 2018 WORKSHOPS, PT III | 2019 / Vol. 11131
Funding
European Union Horizon 2020;
Keywords
Action anticipation; Gaze behavior; Action alignment; Human-robot interaction;
DOI
10.1007/978-3-030-11015-4_17
CLC Number
TP18 [Theory of artificial intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cognitive neuroscience experiments show that people intensify the exchange of non-verbal cues when they work on a joint task towards a common goal. When individuals share their intentions, a social interaction arises that drives the mutual alignment of their actions and behavior. To understand the intentions of others, we rely strongly on gaze cues. Depending on the role each person plays in the interaction, the resulting alignment of body and gaze movements will differ. This mechanism is key to understanding and modeling dyadic social interactions. We focus on the alignment of the leader's behavior during dyadic interactions. The recorded gaze movements of dyads are used to build a model of the leader's gaze behavior. We use the follower's gaze behavior data for two purposes: (i) to determine whether the follower is involved in the interaction, and (ii) to assess whether the follower's gaze behavior correlates with the type of action under execution. This information is then used to plan the leader's actions so as to sustain the leader/follower alignment in the social interaction. The model of the leader's gaze behavior and the alignment of intentions are evaluated in a human-robot interaction scenario, with the robot acting as the leader and the human as the follower. During the interaction, the robot (i) emits non-verbal cues consistent with the action performed; (ii) predicts the human's actions; and (iii) aligns its motion with the human's behavior.
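The record contains only the abstract, so the authors' actual gaze model is not described here. As a purely illustrative sketch (not the paper's method), the follower-involvement check in purpose (i) could be framed as a correlation test between the follower's gaze-on-target signal and the leader's action-phase signal; all names and the threshold below are assumptions:

```python
# Illustrative sketch only: flags a follower as "involved" when their
# gaze-on-target signal correlates with the leader's action phase.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def follower_involved(leader_phase, follower_gaze, threshold=0.5):
    """Hypothetical involvement test: gaze tracks the action phase."""
    return pearson(leader_phase, follower_gaze) >= threshold

# Leader performs an action whose phase ramps up over time; an attentive
# follower's gaze follows it, a distracted follower's gaze does not.
leader = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
attentive = [0.1, 0.2, 0.5, 0.5, 0.9, 1.0]
distracted = [0.9, 0.1, 0.8, 0.2, 0.7, 0.1]

print(follower_involved(leader, attentive))    # True
print(follower_involved(leader, distracted))   # False
```

A real system would of course operate on tracked 3D gaze fixations rather than a scalar signal; the sketch only conveys the alignment idea.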
Pages: 197-212
Page count: 16
References
29 items
[1]   Deliberate Delays During Robot-to-Human Handovers Improve Compliance With Gaze Communication [J].
Admoni, Henny ;
Dragan, Anca ;
Srinivasa, Siddhartha S. ;
Scassellati, Brian .
HRI'14: PROCEEDINGS OF THE 2014 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2014, :49-56
[2]   Looking Coordinated: Bidirectional Gaze Mechanisms for Collaborative Interaction with Virtual Characters [J].
Andrist, Sean ;
Gleicher, Michael ;
Mutlu, Bilge .
PROCEEDINGS OF THE 2017 ACM SIGCHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'17), 2017, :2571-2582
[3]  
[Anonymous], 2018, arXiv:1804.00892
[4]  
Bassetti C, 2017, COMPUT VIS PATT REC, P15, DOI 10.1016/B978-0-12-809276-7.00003-5
[5]  
Biagini F., 2016, ELEMENTS PROBABILITY, P81, DOI [10.1007/978-3-319-07254-8, 10.1007/978-3-319-07254-8_6]
[6]  
Domhof J, 2015, IEEE INT C INT ROBOT, P2406, DOI 10.1109/IROS.2015.7353703
[7]   Action Anticipation: Reading the Intentions of Humans and Robots [J].
Duarte, Nuno Ferreira ;
Rakovic, Mirko ;
Tasevski, Jovica ;
Coco, Moreno Ignazio ;
Billard, Aude ;
Santos-Victor, Jose .
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, 3 (04) :4132-4139
[8]   Gaze-based interaction: A 30 year retrospective [J].
Duchowski, Andrew T. .
COMPUTERS & GRAPHICS-UK, 2018, 73 :59-69
[9]  
Fathi A., 2011, 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), P3281, DOI 10.1109/CVPR.2011.5995444
[10]   Alignment in social interactions [J].
Gallotti, M. ;
Fairhurst, M. T. ;
Frith, C. D. .
CONSCIOUSNESS AND COGNITION, 2017, 48 :253-261