Extended residual learning with one-shot imitation learning for robotic assembly in semi-structured environment

Times Cited: 0
Authors
Wang, Chuang [1 ]
Su, Chupeng [1 ]
Sun, Baozheng [1 ]
Chen, Gang [1 ]
Xie, Longhan [1 ]
Affiliations
[1] South China Univ Technol, Shien Ming Wu Sch Intelligent Engn, Guangzhou, Peoples R China
Source
FRONTIERS IN NEUROROBOTICS | 2024, Vol. 18
Funding
National Natural Science Foundation of China;
Keywords
object-embodiment-centric task representation; residual reinforcement learning; imitation learning; robotic assembly; semi-structured environment; RICH MANIPULATION TASKS; ALGORITHM;
DOI
10.3389/fnbot.2024.1355170
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Introduction: Robotic assembly tasks require precise manipulation and coordination, often necessitating advanced learning techniques to achieve efficient and effective performance. While residual reinforcement learning with a base policy has shown promise in this domain, existing base-policy approaches often rely on hand-designed full-state features and policies or on extensive demonstrations, limiting their applicability in semi-structured environments.
Methods: In this study, we propose an Object-Embodiment-Centric Imitation and Residual Reinforcement Learning (OEC-IRRL) approach that leverages an object-embodiment-centric (OEC) task representation to integrate vision models with imitation and residual learning. By using a single demonstration and minimizing interactions with the environment, the method aims to improve learning efficiency and effectiveness. The proposed method involves three key steps: creating an object-embodiment-centric task representation, learning a base policy by imitation using via-point movement primitives for generalization to different settings, and applying residual RL for uncertainty-aware policy refinement during the assembly phase.
Results: Through a series of comprehensive experiments, we investigate the impact of the OEC task representation on base and residual policy learning and demonstrate the effectiveness of the method in semi-structured environments. Our results indicate that the approach, requiring only a single demonstration and less than 1.2 h of interaction, improves success rates by 46% and reduces assembly time by 25%.
Discussion: This research presents a promising avenue for robotic assembly tasks, providing a viable solution without the need for specialized expertise or custom fixtures.
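As an illustrative sketch only (not the authors' released code), the Python snippet below shows the generic residual-policy composition described in Methods: an imitation-learned base policy proposes an action, and an RL-trained residual policy adds a small correction during the assembly phase. The class names, the 6-D action dimension, the action limit, and the observation layout are assumptions introduced for this example.

import numpy as np

ACTION_LIMIT = 1.0  # assumed symmetric bound on the commanded action (illustrative)

class BasePolicy:
    """Stand-in for the imitation-learned base policy, e.g., via-point
    movement primitives conditioned on the OEC task representation."""
    def act(self, obs: np.ndarray) -> np.ndarray:
        # Placeholder: a real implementation would evaluate the learned
        # movement primitive at the current phase of the task.
        return np.zeros(6)

class ResidualPolicy:
    """Stand-in for the RL actor that outputs a small corrective action."""
    def __init__(self, scale: float = 0.1):
        self.scale = scale  # keep the residual small relative to the base action
    def act(self, obs: np.ndarray) -> np.ndarray:
        # Placeholder: a real implementation would query a trained actor network.
        return self.scale * np.random.uniform(-1.0, 1.0, size=6)

def composed_action(obs, base: BasePolicy, residual: ResidualPolicy) -> np.ndarray:
    """Residual RL composition: commanded action = base action + residual,
    clipped to the controller's action limits."""
    return np.clip(base.act(obs) + residual.act(obs), -ACTION_LIMIT, ACTION_LIMIT)

# Example: dummy observation (e.g., OEC pose features plus force/torque readings)
obs = np.zeros(12)
print(composed_action(obs, BasePolicy(), ResidualPolicy()))

Training only the small residual term, rather than the full policy, is the usual rationale in residual RL for the low interaction budget reported in the Results.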
Pages: 14
Related Papers
16 records in total
  • [1] One-Shot Domain-Adaptive Imitation Learning via Progressive Learning Applied to Robotic Pouring
    Zhang, Dandan
    Fan, Wen
    Lloyd, John
    Yang, Chenguang
Lepora, Nathan F.
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024, 21 (01) : 541 - 554
  • [2] One-shot Imitation Learning via Interaction Warping
    Biza, Ondrej
    Thompson, Skye
    Pagidi, Kishore Reddy
    Kumar, Abhinav
    van der Pol, Elise
    Walters, Robin
    Kipf, Thomas
    van de Meent, Jan-Willem
    Wong, Lawson L. S.
    Platt, Robert
CONFERENCE ON ROBOT LEARNING, VOL 229, 2023
  • [3] Hierarchical Learning Approach for One-shot Action Imitation in Humanoid Robots
    Wu, Yan
    Demiris, Yiannis
    11TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV 2010), 2010, : 453 - 458
  • [4] Deep Adversarial Imitation Learning of Locomotion Skills from One-shot Video Demonstration
    Zhang, Huiwen
    Liu, Yuwang
    Zhou, Weijia
    2019 9TH IEEE ANNUAL INTERNATIONAL CONFERENCE ON CYBER TECHNOLOGY IN AUTOMATION, CONTROL, AND INTELLIGENT SYSTEMS (IEEE-CYBER 2019), 2019, : 1257 - 1261
  • [5] One-shot sim-to-real transfer policy for robotic assembly via reinforcement learning with visual demonstration
    Xiao, Ruihong
    Yang, Chenguang
    Jiang, Yiming
    Zhang, Hui
    ROBOTICA, 2024, 42 (04) : 1074 - 1093
  • [6] One-Shot Imitation Learning With Graph Neural Networks for Pick-and-Place Manipulation Tasks
    Di Felice, Francesco
    D'Avella, Salvatore
    Remus, Alberto
    Tripicchio, Paolo
    Avizzano, Carlo Alberto
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (09) : 5926 - 5933
  • [7] A residual reinforcement learning method for robotic assembly using visual and force information
    Zhang, Zhuangzhuang
    Wang, Yizhao
    Zhang, Zhinan
    Wang, Lihui
    Huang, Huang
    Cao, Qixin
    JOURNAL OF MANUFACTURING SYSTEMS, 2024, 72 : 245 - 262
  • [8] Multimodal Task Attention Residual Reinforcement Learning: Advancing Robotic Assembly in Unstructured Environment
    Lin, Ze
    Wang, Chuang
    Wu, Sihan
    Xie, Longhan
IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (04) : 3900 - 3907
  • [9] DFL-TORO: A One-Shot Demonstration Framework for Learning Time-Optimal Robotic Manufacturing Tasks
    Barekatain, Alireza
    Habibi, Hamed
    Voos, Holger
    IEEE ACCESS, 2024, 12 : 161164 - 161184
  • [10] A Reinforcement One-Shot Active Learning Approach for Aircraft Type Recognition
    Huang, Honglan
    Feng, Yanghe
    Huang, Jincai
    Zhang, Jiarui
    Chen, Li
    IEEE ACCESS, 2019, 7 : 147204 - 147214