Using human gaze in few-shot imitation learning for robot manipulation

Cited by: 1
Authors
Hamano, Shogo [1 ]
Kim, Heecheol [1 ]
Ohmura, Yoshiyuki [1 ]
Kuniyoshi, Yasuo [1 ]
Affiliations
[1] Univ Tokyo, Grad Sch Informat Sci & Technol, Lab Intelligent Syst & Informat, Bunkyo Ku, 7-3-1 Hongo, Tokyo, Japan
Source
2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022
Keywords
Imitation Learning; Deep Learning in Grasping and Manipulation; Few-shot Learning; Meta-learning; Telerobotics and Teleoperation;
DOI
10.1109/IROS47612.2022.9981706
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Imitation learning has attracted attention as a method for realizing complex robot control without programmed robot behavior. Meta-imitation learning has been proposed to address the high cost of data collection and the low generalizability to new tasks that imitation learning suffers from. By learning multiple tasks during training, meta-imitation can learn new tasks involving unknown objects from a small amount of data. However, meta-imitation learning, especially from images, remains vulnerable to changes in the background, which occupies a large portion of the input image. This study introduces human gaze into meta-imitation learning-based robot control. We created a model with model-agnostic meta-learning that predicts the gaze position from the image, using gaze measured by an eye tracker in a head-mounted display. Using images cropped around the predicted gaze position as input makes the model robust to changes in visual information. We experimentally verified the performance of the proposed method through picking tasks with a simulated robot. The results indicate that our proposed method has a greater ability than the conventional method to learn a new task from only 9 demonstrations, even when the object's color or the background pattern changes between training and testing.
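The gaze-centered input described in the abstract — cropping the camera image around a predicted gaze position so background changes fall outside the crop — can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name, crop size, and clamping behavior are all hypothetical.

```python
import numpy as np

def gaze_crop(image, gaze_xy, crop_size=64):
    """Return a square window of `image` centered on the predicted gaze.

    image: H x W x C pixel array.
    gaze_xy: (x, y) predicted gaze position in pixel coordinates.
    The window center is clamped so the crop always lies fully
    inside the image, keeping the output shape fixed for the policy.
    """
    h, w = image.shape[:2]
    half = crop_size // 2
    cx = int(np.clip(gaze_xy[0], half, w - half))  # clamp horizontally
    cy = int(np.clip(gaze_xy[1], half, h - half))  # clamp vertically
    return image[cy - half:cy + half, cx - half:cx + half]
```

Because the crop follows the (predicted) gaze rather than using the full frame, distractors such as a changed background pattern are largely excluded from the policy's input, which is the robustness mechanism the abstract attributes to gaze conditioning.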
Pages: 8622-8629
Page count: 8