Use of Action Label in Deep Predictive Learning for Robot Manipulation

Cited: 0
Authors
Kase, Kei [1 ,2 ]
Utsumi, Chikara [1 ,2 ]
Domae, Yukiyasu [2 ]
Ogata, Tetsuya [1 ,2 ]
Affiliations
[1] Waseda Univ, Tokyo, Japan
[2] Natl Inst Adv Ind Sci & Technol, Tokyo, Japan
Source
2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) | 2022
Keywords
DOI
10.1109/IROS47612.2022.9982091
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Discipline code
0812;
Abstract
Various forms of human knowledge can be explicitly used to enhance deep robot learning from demonstrations. Annotation of subtasks from task segmentation is one form of such human symbolic knowledge. Annotated subtasks can be referred to as action labels: primitive symbols that can serve as building blocks for more complex human reasoning, such as language instructions. However, action labels are not widely used to boost learning processes because of problems that include (1) real-time annotation for online manipulation, (2) temporal inconsistency among annotators, (3) differences in the data characteristics of motor commands and action labels, and (4) annotation cost. To address these problems, we propose the Gated Action Motor Predictive Learning (GAMPL) framework, which leverages action labels for improved performance. GAMPL has two modules: one obtains soft action labels compatible with motor commands, and the other generates motion. In this study, GAMPL is evaluated on towel-folding manipulation tasks in a real environment with a six-degrees-of-freedom (6-DoF) robot and shows improved generalizability with action labels.
Pages: 13459-13465
Page count: 7
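
To make the two-module structure described in the abstract concrete, below is a minimal PyTorch sketch of the general idea: one recurrent module predicts soft (probabilistic) action labels from visuomotor features, and those labels condition a second recurrent module that predicts the next motor command for a 6-DoF arm. All module names, layer sizes, and the exact way the labels gate the motion module are illustrative assumptions, not the published GAMPL architecture.

import torch
import torch.nn as nn

class ActionLabelModule(nn.Module):
    # Predicts a soft action label (probability over subtasks) at each time step.
    def __init__(self, feat_dim=64, num_actions=4):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, 32, batch_first=True)
        self.head = nn.Linear(32, num_actions)

    def forward(self, feats):                       # feats: (batch, time, feat_dim)
        h, _ = self.rnn(feats)
        return torch.softmax(self.head(h), dim=-1)  # (batch, time, num_actions)

class MotionModule(nn.Module):
    # Predicts the next motor command, conditioned ("gated") on the soft action label.
    def __init__(self, feat_dim=64, num_actions=4, dof=6):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim + num_actions, 64, batch_first=True)
        self.out = nn.Linear(64, dof)

    def forward(self, feats, soft_labels):
        x = torch.cat([feats, soft_labels], dim=-1)  # soft label supplied as extra context
        h, _ = self.rnn(x)
        return self.out(h)                           # (batch, time, dof) motor commands

# Toy usage: 2 demonstration sequences, 10 time steps, 64-dim visuomotor features.
feats = torch.randn(2, 10, 64)
labels = ActionLabelModule()(feats)
commands = MotionModule()(feats, labels)
print(labels.shape, commands.shape)  # torch.Size([2, 10, 4]) torch.Size([2, 10, 6])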