Towards Natural and Intuitive Human-Robot Collaboration based on Goal-Oriented Human Gaze Intention Recognition

Cited by: 0

Authors
Lim, Taeyhang [1]
Lee, Joosun [2]
Kim, Wansoo [3]
Affiliations
[1] Hanyang Univ, Dept Interdisciplinary Robot Engn Syst, Seoul, South Korea
[2] Hanyang Univ, Dept Mech Engn, Seoul, South Korea
[3] Hanyang Univ, Dept Robot Engn, ERICA, Seoul, South Korea
Source
2023 SEVENTH IEEE INTERNATIONAL CONFERENCE ON ROBOTIC COMPUTING, IRC 2023 | 2023
Keywords
Human-Robot Interaction; Intention Recognition; Augmented Reality; Service Robotics;
DOI
10.1109/IRC59093.2023.00027
CLC Number
TP301 [Theory and Methods];
Subject Classification
081202;
Abstract
The objective of this paper is to introduce a new method for predicting human gaze intention using a head-mounted display, with the aim of enabling natural and intuitive collaboration between humans and robots. Human eye gaze is strongly linked to cognitive processes and can facilitate communication between humans and robots. However, accurately identifying the goal-directed object from human intention remains challenging. This study develops a method to differentiate between goal and non-goal gaze by creating an area of interest (AOI) on each object targeted by goal-directed gaze. The Microsoft HoloLens 2 was used to simulate the robot in augmented reality (AR) with real-time gaze data. The methods with and without AOI were compared on a pick-and-place robot manipulation task driven by human gaze prediction. The AOI method resulted in a maximum improvement of 19% in F1 score over the baseline method. The results provide strong evidence that pre-defined AOIs improve the performance of gaze-intention prediction in an intuitive and useful way, with potential applications in fields where human-robot collaboration can enhance efficiency and productivity.
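The abstract does not specify how the AOIs are defined or how gaze samples are classified against them; the following is a minimal illustrative sketch only, assuming circular 2-D AOIs and ground-truth labels, with all names (`gaze_in_aoi`, `f1_score`) and the sample data invented for illustration. The paper's actual method operates on HoloLens 2 eye-tracking data in AR and is not reproduced here.

```python
import math

def gaze_in_aoi(gaze_point, aoi_center, aoi_radius):
    """Classify a 2-D gaze sample as goal-directed if it falls inside a circular AOI."""
    dx = gaze_point[0] - aoi_center[0]
    dy = gaze_point[1] - aoi_center[1]
    return math.hypot(dx, dy) <= aoi_radius

def f1_score(preds, labels):
    """F1 score of binary predictions against ground-truth labels."""
    tp = sum(1 for p, l in zip(preds, labels) if p and l)
    fp = sum(1 for p, l in zip(preds, labels) if p and not l)
    fn = sum(1 for p, l in zip(preds, labels) if not p and l)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical data: one AOI around an object at (0.5, 0.5) in normalized view coordinates
aoi_center, aoi_radius = (0.5, 0.5), 0.1
samples = [(0.52, 0.49), (0.9, 0.1), (0.48, 0.55), (0.2, 0.8)]
labels = [True, False, True, False]  # ground truth: goal-directed or not
preds = [gaze_in_aoi(s, aoi_center, aoi_radius) for s in samples]
print(f1_score(preds, labels))  # → 1.0
```

In this toy setting the AOI test recovers the labels exactly; the paper's reported gain is that adding such pre-defined AOIs improved F1 by up to 19% over classification without them.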
Pages: 115-120
Page count: 6