Using Goal-Conditioned Reinforcement Learning With Deep Imitation to Control Robot Arm in Flexible Flat Cable Assembly Task

Cited by: 6
Authors
Li, Jingchen [1 ]
Shi, Haobin [1 ]
Hwang, Kao-Shing [2 ,3 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Peoples R China
[2] Natl Sun Yat Sen Univ, Dept Elect Engn, Kaohsiung 81164, Taiwan
[3] Kaohsiung Med Univ, Dept Healthcare Adm & Med Informat, Kaohsiung 80708, Taiwan
Funding
National Natural Science Foundation of China;
Keywords
Robots; Manipulators; Reinforcement learning; Task analysis; Connectors; Service robots; Production; Deep reinforcement learning; robot arm; intelligent assembly;
DOI
10.1109/TASE.2023.3323307
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
Leveraging reinforcement learning for high-precision decision-making in robot arm assembly scenes is a desired goal in the industrial community. However, tasks like Flexible Flat Cable (FFC) assembly, which require highly trained workers, pose significant challenges due to sparse rewards and limited learning conditions. In this work, we propose a goal-conditioned self-imitation reinforcement learning method for FFC assembly that does not rely on a specific end-effector, in which both perception and behavior planning are learned through reinforcement learning. We analyze the challenges faced by robot arms in high-precision assembly scenarios and balance the breadth and depth of exploration during training. Our end-to-end model consists of hindsight and self-imitation modules, allowing the robot arm to leverage futile exploration and optimize successful trajectories. Our method does not require rule-based or manual rewards, and it enables the robot arm to quickly find feasible solutions through experience relabeling while avoiding unnecessary exploration. We train the FFC assembly policy in a simulation environment and transfer it to the real scenario using domain adaptation. We explore various combinations of hindsight and self-imitation learning, and discuss the results comprehensively. Experimental findings demonstrate that our model achieves fast, state-of-the-art flexible flat cable assembly, surpassing other reinforcement learning-based methods.
Note to Practitioners: The motivation of this article stems from the need to develop an efficient and accurate FFC assembly policy for the 3C (computer, communication, and consumer electronics) industry, promoting the development of intelligent manufacturing. Traditional control methods cannot complete such a high-precision task with a robot arm because the connectors are difficult to model, and existing reinforcement learning methods fail to converge within a restricted number of epochs because of the difficulty of the goals or trajectories.
To let the robot arm quickly learn a high-quality assembly policy and to accelerate convergence, we combine goal-conditioned reinforcement learning with a self-imitation mechanism, balancing the depth and breadth of exploration. The proposed method takes visual information and six-dimensional force as the state, obtaining satisfactory assembly policies. We build a simulation scene on the PyBullet platform, pre-train the robot arm in it, and then reuse the pre-trained policies in real scenarios with fine-tuning.
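The core mechanisms the abstract names, hindsight experience relabeling (turning futile exploration into useful data) and self-imitation (re-optimizing only successful trajectories), can be sketched as follows. This is a minimal illustration under assumed data structures (transitions as dictionaries with `achieved_goal`, `goal`, and `reward` keys), not the authors' implementation:

```python
def sparse_reward(achieved, goal, tol=1e-3):
    """Sparse binary reward: 1.0 only when the achieved goal matches
    the desired goal within a tolerance (hypothetical reward shape)."""
    return 1.0 if all(abs(a - g) <= tol for a, g in zip(achieved, goal)) else 0.0

def her_relabel(trajectory):
    """Hindsight relabeling: replace the desired goal of every transition
    with the goal actually achieved at the end of the episode, then
    recompute the sparse reward. A failed trajectory thus becomes a valid
    demonstration of reaching wherever the arm actually ended up."""
    final_achieved = trajectory[-1]["achieved_goal"]
    relabeled = []
    for t in trajectory:
        new_t = dict(t)
        new_t["goal"] = final_achieved
        new_t["reward"] = sparse_reward(t["achieved_goal"], final_achieved)
        relabeled.append(new_t)
    return relabeled

def self_imitation_buffer(trajectories):
    """Self-imitation filter: keep only trajectories that ended in success,
    so the policy can imitate and re-optimize its own best behavior."""
    return [traj for traj in trajectories if traj[-1]["reward"] > 0.0]
```

In this sketch, relabeling densifies the otherwise sparse reward signal (broadening exploration), while the self-imitation buffer concentrates learning on successful trajectories (deepening it), mirroring the breadth/depth balance described above.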
Pages: 6217-6228
Page count: 12