Natural Grasp Intention Recognition Based on Gaze in Human-Robot Interaction

Cited by: 12
Authors
Yang, Bo [1 ]
Huang, Jian [1 ]
Chen, Xinxing [2 ,3 ]
Li, Xiaolong [1 ]
Hasegawa, Yasuhisa [4 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automation, Key Lab Image Proc & Intelligent Control, Wuhan 430074, Peoples R China
[2] Southern Univ Sci & Technol, Shenzhen Key Lab Biomimet Robot & Intelligent Syst, Shenzhen 518055, Peoples R China
[3] Southern Univ Sci & Technol, Guangdong Prov Key Lab Human Augmentat & Rehabil Robot Univ, Shenzhen 518055, Peoples R China
[4] Nagoya Univ, Dept Micronano Mech Sci & Engn, Furo cho Chikusa ku, Nagoya 4648603, Japan
Funding
National Natural Science Foundation of China;
Keywords
Grasp intention recognition; gaze movement modeling; human-robot interaction; feature extraction; EYE-MOVEMENTS; PREDICTION; VISION;
DOI
10.1109/JBHI.2023.3238406
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Objective: While neuroscience research has established a link between vision and intention, studies on gaze-data features for intention recognition remain scarce. Most existing gaze-based intention recognition approaches rely on deliberate long-term fixation and suffer from insufficient accuracy. To address the lack of features and the insufficient accuracy of previous studies, the primary objective of this study is to suppress noise in human gaze data and extract features useful for recognizing grasp intention. Methods: We conduct gaze movement evaluation experiments to investigate the characteristics of gaze motion. Based on the findings, we propose the target-attracted gaze movement model (TAGMM) as a quantitative description of gaze movement. A Kalman filter (KF) based on TAGMM is used to reduce the noise in the gaze data. We conduct gaze-based natural grasp intention recognition evaluation experiments to collect the subjects' gaze data. Four types of features describing gaze point dispersion (f_var), gaze point movement (f_gm), head movement (f_hm), and the distance from gaze points to objects (f_dj) are then proposed to recognize the subjects' grasp intentions. With the proposed features, we perform intention recognition experiments employing various classifiers and compare the results with those of other methods. Results: Statistical analysis reveals that the proposed features differ significantly across intentions, making them suitable for recognizing grasp intentions. We demonstrate the intention recognition performance of TAGMM and the proposed features in within-subject and cross-subject experiments. The results indicate that the proposed method recognizes intention with accuracy improvements of 44.26% (within-subject) and 30.67% (cross-subject) over the fixation-based method. The proposed method also requires less time (34.87 ms) to recognize intention than the fixation-based method (about 1 s). Conclusion: This work introduces a novel TAGMM for modeling gaze movement and a set of practical features for recognizing grasp intentions. Experiments confirm the effectiveness of the approach. Significance: The proposed TAGMM can model gaze movement and be used to process gaze data, and the proposed features can reveal the user's intentions. These results contribute to the development of gaze-based human-robot interaction.
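The abstract describes a pipeline of Kalman-filter denoising of gaze data under TAGMM, extraction of four feature families (f_var, f_gm, f_hm, f_dj), and classification. This record does not give the TAGMM equations, so the Python sketch below is a minimal, hypothetical instantiation: a generic constant-velocity motion model stands in for TAGMM, and the feature formulas (simple dispersion, step-length, and distance statistics) are plausible analogues rather than the paper's definitions; all function names and noise parameters are assumptions.

```python
# Minimal sketch (not the paper's implementation): constant-velocity Kalman
# filtering of 2-D gaze points, followed by crude analogues of the four
# feature families named in the abstract. Names and parameters are assumed.
import numpy as np

def kalman_smooth_gaze(gaze_xy, dt=1.0 / 60, q=1e-3, r=1e-2):
    """Denoise a (T, 2) gaze trajectory; state is [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # constant-velocity dynamics
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)    # only position is observed
    Q, R, P = q * np.eye(4), r * np.eye(2), np.eye(4)
    x = np.array([gaze_xy[0, 0], gaze_xy[0, 1], 0.0, 0.0])
    smoothed = []
    for z in gaze_xy:
        x, P = F @ x, F @ P @ F.T + Q                    # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
        x = x + K @ (z - H @ x)                          # correct with measurement
        P = (np.eye(4) - K @ H) @ P
        smoothed.append(x[:2].copy())
    return np.asarray(smoothed)

def extract_features(gaze_xy, head_rpy, obj_centers):
    """Feature vector: [f_var, f_gm, f_hm, f_dj(1), ..., f_dj(K)]."""
    f_var = gaze_xy.var(axis=0).sum()                                  # dispersion
    f_gm = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1).mean()     # gaze movement
    f_hm = np.linalg.norm(np.diff(head_rpy, axis=0), axis=1).mean()    # head movement
    f_dj = np.linalg.norm(gaze_xy[:, None, :] - obj_centers[None],     # mean distance
                          axis=2).mean(axis=0)                         # to each object
    return np.concatenate([[f_var, f_gm, f_hm], f_dj])

# A short sliding window of denoised gaze feeds any standard classifier, e.g.
#   from sklearn.svm import SVC
#   clf = SVC().fit(X_train, y_train)   # X_train: stacked feature vectors
```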
Pages: 2059-2070
Number of pages: 12
Related Papers (50 in total)
  • [31] User feedback in human-robot interaction: Prosody, gaze and timing
    Skantze, Gabriel
    Oertel, Catharine
    Hjalmarsson, Anna
    14TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2013), VOLS 1-5, 2013, : 1900 - 1904
  • [32] A General Approach to Natural Human-Robot Interaction
    Sabattini, Lorenzo
    Villani, Valeria
    Secchi, Cristian
    Fantuzzi, Cesare
    HUMAN FRIENDLY ROBOTICS, 2019, 7 : 61 - 71
  • [33] Hand posture recognition in gesture-based human-robot interaction
    Yin, Xiaoming
    Zhu, Xing
    ICIEA 2006: 1ST IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS, VOLS 1-3, PROCEEDINGS, 2006, : 397 - 402
  • [34] Hand posture recognition in gesture-based human-robot interaction
    Yin, Xiaoming
    Zhu, Xing
    2006 1ST IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS, VOLS 1-3, 2006, : 835 - +
  • [35] Neural Control for Human-Robot Interaction with Human Motion Intention Estimation
    Peng, Guangzhu
    Yang, Chenguang
    Chen, C. L. Philip
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2024, 71 (12) : 16317 - 16326
  • [36] Accelerometer-based Hand Gesture Recognition for Human-Robot Interaction
    Anderez, Dario Ortega
    Dos Santos, Luis Pedro
    Lotfi, Ahmad
    Yahaya, Salisu Wada
    2019 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2019), 2019, : 1402 - 1406
  • [37] A Vision-based Gesture Recognition System for Human-Robot Interaction
    Zhang, Jianjie
    Zhao, Mingguo
    2009 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO 2009), VOLS 1-4, 2009, : 2096 - 2101
  • [38] Human-robot collaborative interaction with human perception and action recognition
    Yu, Xinyi
    Zhang, Xin
    Xu, Chengjun
    Ou, Linlin
    NEUROCOMPUTING, 2024, 563
  • [39] Visual recognition of pointing gestures for human-robot interaction
    Nickel, Kai
    Stiefelhagen, Rainer
    IMAGE AND VISION COMPUTING, 2007, 25 (12) : 1875 - 1884
  • [40] Action Alignment from Gaze Cues in Human-Human and Human-Robot Interaction
    Duarte, Nuno Ferreira
    Rakovic, Mirko
    Marques, Jorge
    Santos-Victor, Jose
    COMPUTER VISION - ECCV 2018 WORKSHOPS, PT III, 2019, 11131 : 197 - 212