A Task-Adaptive Deep Reinforcement Learning Framework for Dual-Arm Robot Manipulation

Cited by: 4
Authors
Cui, Yuanzhe [1]
Xu, Zhipeng [1]
Zhong, Lou [1]
Xu, Pengjie [1]
Shen, Yichao [2,3]
Tang, Qirong [1]
Affiliations
[1] Tongji Univ, Sch Mech Engn, Lab Robot & Multibody Syst, Shanghai 201804, Peoples R China
[2] Tongji Univ, Sch Mech Engn, Lab Robot & Multibody Syst, Shanghai 201804, Peoples R China
[3] Univ Stuttgart, Inst Engn & Computat Mech, D-70569 Stuttgart, Germany
Funding
National Natural Science Foundation of China;
Keywords
Robots; Manipulators; Task analysis; Planning; Reinforcement learning; Aerospace electronics; Service robots; Dual-arm robot manipulation; deep reinforcement learning; MOTION;
DOI
10.1109/TASE.2024.3352584
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Closed-chain manipulation arises when several robot arms perform a task cooperatively. Controlling a dual-arm system for closed-chain manipulation is difficult because it demands flexible and adaptable operation. In this study, a deep reinforcement learning (DRL) framework based on an actor-critic algorithm is proposed to drive the closed-chain manipulation of a dual-arm robotic system. The framework is designed to train the two robot arms to transport a large object cooperatively. To satisfy the strict constraints of closed-chain manipulation, the actor part of the framework is organized in a leader-follower mode: the leader part is a policy trained with the DRL algorithm and controls the leader arm, while the follower part is an inverse kinematics solver based on Damped Least Squares (DLS) and controls the follower arm. Two experiments demonstrate the task adaptability: one manipulates an object to a random pose within a defined range, and the other manipulates a delicate structural object within a narrow space.
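As a rough illustration of the follower-arm controller described in the abstract, the sketch below shows a generic Damped Least Squares (DLS) inverse-kinematics step, dq = J^T (J J^T + lambda^2 I)^-1 dx. It is only a minimal sketch of the standard DLS update, not the authors' implementation; the Jacobian, damping value, and pose-error inputs are hypothetical placeholders.

import numpy as np

def dls_ik_step(jacobian, pose_error, damping=0.05):
    # One Damped Least Squares step: dq = J^T (J J^T + lambda^2 I)^-1 dx.
    # jacobian:   (6, n) geometric Jacobian of the follower arm (placeholder input)
    # pose_error: (6,) error twist between the desired and current end-effector pose
    # damping:    lambda, trades tracking accuracy for robustness near singularities
    jjt = jacobian @ jacobian.T
    damped = jjt + (damping ** 2) * np.eye(jjt.shape[0])
    return jacobian.T @ np.linalg.solve(damped, pose_error)

# Hypothetical usage: the follower arm tracks the grasp pose implied by the
# leader arm's learned policy so that the closed-chain constraint is preserved.
# dq = dls_ik_step(jacobian_of(q_follower), desired_pose - current_pose)
# q_follower = q_follower + dq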
Pages: 466-479
Number of pages: 14