State-Dependent Maximum Entropy Reinforcement Learning for Robot Long-Horizon Task Learning

Cited by: 1
Authors
Zheng, Deshuai [1 ]
Yan, Jin [1 ]
Xue, Tao [1 ]
Liu, Yong [1 ]
Affiliations
[1] Nanjing University of Science and Technology, School of Computer Science and Engineering, Nanjing 210000, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Long-horizon task; Robot learning; Reinforcement learning
DOI
10.1007/s10846-024-02049-8
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Task-oriented robot learning has shown significant potential with the development of Reinforcement Learning (RL) algorithms. However, learning long-horizon tasks remains a formidable challenge for robots because such tasks are inherently complex, typically comprising multiple diverse stages. General-purpose RL algorithms commonly suffer from slow convergence, or fail to converge altogether, when applied to these tasks. These difficulties stem from local-optima traps and redundant exploration at the start of a new stage or at the junction between two consecutive stages. To address them, we propose a novel state-dependent maximum entropy (SDME) reinforcement learning algorithm, which balances the trade-off between exploration and exploitation around three kinds of critical states that arise from the unique structure of long-horizon tasks. We conducted experiments in an open-source simulation environment on two representative long-horizon tasks. The proposed SDME algorithm learns faster and more stably, requiring only one-third of the learning samples needed by baseline approaches. We further assess the generalization ability of our method under randomly initialized conditions; the results show that the success rate of the SDME algorithm is nearly twice that of the baselines. Our code will be available at https://github.com/Peter-zds/SDME.
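The abstract states the idea but not the concrete mechanism. For orientation only: in maximum-entropy RL (e.g., SAC), the objective is J(pi) = sum_t E[r(s_t, a_t) + alpha * H(pi(. | s_t))] with a constant temperature alpha; a state-dependent variant replaces alpha with alpha(s). The sketch below is a hypothetical Python illustration of that idea, not the paper's actual SDME method: the function state_dependent_alpha, its distance-based parameterization, and the choice of raising entropy near critical states (such as stage junctions) are all assumptions made here for illustration.

```python
import numpy as np

# Maximum-entropy RL objective (as in SAC):
#   J(pi) = sum_t E[ r(s_t, a_t) + alpha * H(pi(. | s_t)) ]
# A state-dependent variant replaces the constant alpha with alpha(s).
# Hypothetical parameterization: boost entropy (exploration) near
# critical states, decay toward a base value (exploitation) elsewhere.

def state_dependent_alpha(state, critical_states, base_alpha=0.2,
                          boost=1.0, radius=0.5):
    """Hypothetical temperature: base_alpha plus a bonus that decays
    with distance to the nearest critical state."""
    dists = [np.linalg.norm(state - c) for c in critical_states]
    nearest = min(dists) if dists else np.inf
    return base_alpha + boost * np.exp(-(nearest / radius) ** 2)

def soft_value_target(reward, next_log_prob, next_q, alpha_s, gamma=0.99):
    """SAC-style soft Bellman target with a per-state entropy weight:
    y = r + gamma * (Q(s', a') - alpha(s') * log pi(a' | s'))."""
    return reward + gamma * (next_q - alpha_s * next_log_prob)

# Toy usage: entropy weight near a stage junction vs. far from it.
junctions = [np.array([1.0, 0.0])]
print(state_dependent_alpha(np.array([1.1, 0.0]), junctions))  # ~1.16, high
print(state_dependent_alpha(np.array([5.0, 5.0]), junctions))  # ~0.20, base
```

In a full SAC-style agent, alpha(s') would enter the soft Bellman target exactly where the constant temperature normally does, as soft_value_target indicates; everything beyond that substitution is specific to the paper and not reconstructed here.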
Pages: 14