Use of Action Label in Deep Predictive Learning for Robot Manipulation

Cited: 0
Authors
Kase, Kei [1 ,2 ]
Utsumi, Chikara [1 ,2 ]
Domae, Yukiyasu [2 ]
Ogata, Tetsuya [1 ,2 ]
Affiliations
[1] Waseda Univ, Tokyo, Japan
[2] Natl Inst Adv Ind Sci & Technol, Tokyo, Japan
Source
2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022
DOI
10.1109/IROS47612.2022.9982091
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Various forms of human knowledge can be explicitly used to enhance deep robot learning from demonstrations. Annotation of subtasks obtained from task segmentation is one such form of human symbolic knowledge. Annotated subtasks can be referred to as action labels: primitive symbols that can serve as building blocks for more complex human reasoning, such as language instructions. However, action labels are not widely used to boost learning processes because of problems that include (1) the need for real-time annotation during online manipulation, (2) temporal inconsistency among annotators, (3) the difference in data characteristics between motor commands and action labels, and (4) annotation cost. To address these problems, we propose the Gated Action Motor Predictive Learning (GAMPL) framework, which leverages action labels for improved performance. GAMPL has two modules: one obtains soft action labels compatible with motor commands, and the other generates motion. In this study, GAMPL is evaluated on towel-folding manipulation tasks in a real environment with a six degrees-of-freedom (6-DoF) robot and shows improved generalizability when action labels are used.
Pages: 13459-13465
Number of pages: 7
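
As a rough illustration of the architecture described in the abstract, below is a minimal PyTorch-style sketch of a GAMPL-like network with two modules: one predicting soft action labels from the observation sequence, and one gating motor-command prediction with those labels. All class names, dimensions, the LSTM backbones, and the sigmoid gating choice are illustrative assumptions, not the authors' published implementation.

# Illustrative sketch only (not the authors' code): a two-module network where
# soft action labels condition and gate the prediction of 6-DoF motor commands.
import torch
import torch.nn as nn

class ActionLabelModule(nn.Module):
    """Predicts a soft (probabilistic) action label for each time step."""
    def __init__(self, obs_dim=64, hidden=128, n_actions=4):
        super().__init__()
        self.rnn = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq):
        h, _ = self.rnn(obs_seq)                   # (B, T, hidden)
        return torch.softmax(self.head(h), -1)     # soft labels, (B, T, n_actions)

class GatedMotorModule(nn.Module):
    """Predicts motor commands, gated by the soft action labels."""
    def __init__(self, obs_dim=64, n_actions=4, hidden=128, motor_dim=6):
        super().__init__()
        self.rnn = nn.LSTM(obs_dim + n_actions, hidden, batch_first=True)
        self.gate = nn.Linear(n_actions, hidden)   # label-conditioned gate (assumption)
        self.out = nn.Linear(hidden, motor_dim)

    def forward(self, obs_seq, soft_labels):
        h, _ = self.rnn(torch.cat([obs_seq, soft_labels], -1))
        h = h * torch.sigmoid(self.gate(soft_labels))   # gate hidden state by action label
        return self.out(h)                              # (B, T, motor_dim), e.g. 6-DoF commands

class GAMPL(nn.Module):
    """Two-module composition: label prediction followed by gated motion generation."""
    def __init__(self):
        super().__init__()
        self.label_module = ActionLabelModule()
        self.motor_module = GatedMotorModule()

    def forward(self, obs_seq):
        soft_labels = self.label_module(obs_seq)
        motor_cmds = self.motor_module(obs_seq, soft_labels)
        return motor_cmds, soft_labels

# Usage example: obs = torch.randn(2, 50, 64); cmds, labels = GAMPL()(obs)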