Control strategy of robotic manipulator based on multi-task reinforcement learning

Cited: 0
Authors
Wang, Tao [1 ,2 ]
Ruan, Ziming [1 ,2 ]
Wang, Yuyan [1 ]
Chen, Chong [1 ]
Affiliations
[1] Guangdong Univ Technol, Guangdong Prov Key Lab Cyber Phys Syst, Guangzhou 510006, Peoples R China
[2] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Reinforcement learning; Multi-task learning; Meta-World; Robotic manipulation task;
DOI
10.1007/s40747-025-01816-w
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Multi-task learning is important in reinforcement learning: training across different tasks simultaneously allows shared information to be leveraged among them, typically leading to better performance than single-task learning. While joint training of multiple tasks permits parameter sharing between tasks, the optimization challenge becomes crucial: identifying which parameters should be reused and managing potential gradient conflicts arising from different tasks. To tackle this issue, instead of uniform parameter sharing, we propose a decision reconstruction network model, which we integrate into the Soft Actor-Critic (SAC) algorithm to address the optimization problems caused by parameter sharing in multi-task reinforcement learning. The decision reconstruction network model achieves information exchange across network layers by dynamically adjusting and reconfiguring the network hierarchy, overcoming the inherent limitations of traditional network architectures in multi-task scenarios. The SAC algorithm based on the decision reconstruction network model can be trained on multiple tasks simultaneously, effectively learning and integrating the relevant knowledge of each task. Finally, the proposed algorithm is evaluated in the multi-task environments of Meta-World, a multi-task reinforcement learning benchmark containing robotic manipulation tasks, and in the multi-task MuJoCo environment.
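The record contains no code. Purely as an illustration of the general idea described in the abstract, namely a shared SAC feature trunk whose layer-to-layer composition is reconfigured per task rather than fixed, the following is a minimal, hypothetical PyTorch sketch. The class and parameter names (RoutedTrunk, n_modules, the observation size, etc.) are assumptions for illustration and do not reproduce the authors' published model.

```python
# Hypothetical sketch: a multi-task feature trunk whose layer-to-layer mixing
# weights are produced per task, so information can be exchanged and recombined
# across network layers instead of flowing through one fixed shared stack.
# The resulting features would feed the SAC actor/critic heads.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RoutedTrunk(nn.Module):
    def __init__(self, obs_dim, n_tasks, hidden=128, n_layers=3, n_modules=4):
        super().__init__()
        self.n_layers, self.n_modules = n_layers, n_modules
        self.embed = nn.Linear(obs_dim, hidden)
        # A bank of small shared modules per layer; the task decides how to mix them.
        self.modules_per_layer = nn.ModuleList([
            nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(n_modules)])
            for _ in range(n_layers)
        ])
        # Router: task one-hot -> mixing weights for every (layer, module) pair.
        self.router = nn.Sequential(
            nn.Linear(n_tasks, hidden), nn.ReLU(),
            nn.Linear(hidden, n_layers * n_modules),
        )

    def forward(self, obs, task_onehot):
        w = self.router(task_onehot).view(-1, self.n_layers, self.n_modules)
        w = torch.softmax(w, dim=-1)                   # per-layer mixing weights
        h = F.relu(self.embed(obs))
        for i, layer in enumerate(self.modules_per_layer):
            outs = torch.stack([F.relu(m(h)) for m in layer], dim=1)  # (B, M, H)
            h = (w[:, i].unsqueeze(-1) * outs).sum(dim=1)             # task-weighted mix
        return h


# Usage: one trunk shared by all tasks; the task id selects the routing.
obs = torch.randn(8, 39)                               # assumed observation size
task = F.one_hot(torch.randint(0, 10, (8,)), num_classes=10).float()
features = RoutedTrunk(obs_dim=39, n_tasks=10)(obs, task)
print(features.shape)                                  # torch.Size([8, 128])
```

The design choice sketched here, soft per-task routing over a shared module bank, is one common way to let tasks reuse parameters selectively while reducing gradient conflicts; the paper's decision reconstruction network may differ in its specific mechanism.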
Pages: 14