Using human gaze in few-shot imitation learning for robot manipulation

Cited by: 1
Authors
Hamano, Shogo [1 ]
Kim, Heecheol [1 ]
Ohmura, Yoshiyuki [1 ]
Kuniyoshi, Yasuo [1 ]
Affiliations
[1] Univ Tokyo, Grad Sch Informat Sci & Technol, Lab Intelligent Syst & Informat, Bunkyo Ku, 7-3-1 Hongo, Tokyo, Japan
Source
2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) | 2022
Keywords
Imitation Learning; Deep Learning in Grasping and Manipulation; Few-shot Learning; Meta-learning; Telerobotics and Teleoperation;
DOI
10.1109/IROS47612.2022.9981706
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Discipline classification code
0812;
Abstract
Imitation learning has attracted attention as a method for realizing complex robot control without hand-programmed robot behavior. Meta-imitation learning has been proposed to address the high cost of data collection and the poor generalization to new tasks from which imitation learning suffers: by learning multiple tasks during training, it can acquire new tasks involving unknown objects from only a small amount of data. However, meta-imitation learning, especially from images, remains vulnerable to changes in the background, which occupies a large portion of the input image. This study introduces human gaze into meta-imitation-learning-based robot control. Measuring the demonstrator's gaze with an eye tracker in a head-mounted display, we trained a model with model-agnostic meta-learning to predict the gaze position from the image. Using the image region around the predicted gaze position as the input makes the model robust to changes in visual information. We experimentally verified the performance of the proposed method on picking tasks with a simulated robot. The results indicate that the proposed method learns a new task from only 9 demonstrations better than the conventional method, even when the object's color or the background pattern changes between training and testing.
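A minimal sketch of the gaze-centred cropping idea described in the abstract, not the authors' implementation: the function name crop_around_gaze, the crop size, and the array layout are illustrative assumptions. It only shows how a fixed-size patch around a predicted gaze point could be extracted before being passed to a policy network.

```python
# Illustrative sketch (not from the paper): crop a fixed-size window around a
# predicted gaze position so the downstream policy sees only gaze-centred pixels.
# `image` is assumed to be an HxWxC array and `gaze_xy` an (x, y) pixel coordinate.
import numpy as np

def crop_around_gaze(image: np.ndarray, gaze_xy, crop_size: int = 64) -> np.ndarray:
    """Return a crop_size x crop_size patch centred on the predicted gaze point,
    zero-padding where the window extends past the image border."""
    half = crop_size // 2
    x, y = int(round(gaze_xy[0])), int(round(gaze_xy[1]))
    # Pad the image so the window never leaves the valid area.
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="constant")
    # In the padded frame, original pixel (x, y) sits at (x + half, y + half),
    # so slicing from (y, x) yields a window centred on the gaze point.
    return padded[y:y + crop_size, x:x + crop_size]

# Example: a dummy 240x320 RGB frame with the gaze predicted near the centre.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
patch = crop_around_gaze(frame, gaze_xy=(160, 120), crop_size=64)
assert patch.shape == (64, 64, 3)
```

In the paper, the gaze position itself is predicted by a network trained with model-agnostic meta-learning on demonstrator eye-tracking data; the fixed coordinates above merely stand in for that predicted point.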
Pages: 8622-8629
Number of pages: 8