Particle swarm optimization based multi-task parallel reinforcement learning algorithm

Cited by: 4
Authors
Duan Junhua [1]
Zhu Yi-an [1]
Zhong Dong [1]
Zhang Lixiang [1]
Zhang Lin [1]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp, 127 West Youyi Rd, Xian 710072, Shaanxi, Peoples R China
Keywords
Multi-task reinforcement learning; parallel reinforcement learning; particle swarm optimization; transfer learning
DOI
10.3233/JIFS-190209
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Transfer learning has been shown to improve the speed of machine learning in many areas. In multi-task reinforcement learning, transfer learning allows experience to be shared between tasks. The research in this article focuses on two aspects. On the one hand, multi-task parallel transfer learning can improve the learning speed of the parallel learning tasks. On the other hand, learning from the current optimal experience helps propagate rewards from the target point back to the starting point, and this self-learning likewise accelerates the convergence of reinforcement learning. Building on these two aspects, this paper applies the idea of particle swarm optimization (PSO) to carry out self-learning and interactive learning in multi-task parallel learning, and proposes a new multi-task learning algorithm named PSO-MTPRL (Multi-Task Parallel Reinforcement Learning based on PSO). Following the PSO paradigm, the algorithm probabilistically selects among a Boltzmann exploration strategy, a Self-Learning Process (SLP), and an Interactive Learning Process (ILP). Given the characteristics of reinforcement learning, a segmented learning model is adopted: in the early learning stage, the full Boltzmann exploration strategy is applied; the B-SLP-ILP (Boltzmann-SLP-ILP) procedure is conducted exclusively in the middle stage; and in the late stage, Boltzmann exploration is used again. The segmented learning model helps balance exploration and exploitation and ensures that all tasks converge.
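The abstract describes the segmented schedule only in prose. The following Python sketch illustrates one plausible reading of it; the stage boundaries (20%/80% of training), the mixing probabilities p_slp and p_ilp, and all function names are illustrative assumptions, not the authors' implementation.

import math
import random

def boltzmann_action(q_values, temperature=1.0):
    # Boltzmann (softmax) exploration over the Q-values of one state:
    # sample an action with probability proportional to exp(Q/T).
    prefs = [math.exp(q / temperature) for q in q_values]
    total = sum(prefs)
    r, cum = random.random(), 0.0
    for action, p in enumerate(prefs):
        cum += p / total
        if r < cum:
            return action
    return len(q_values) - 1

def choose_process(episode, total_episodes, p_slp=0.3, p_ilp=0.3):
    # Segmented learning model: pure Boltzmann exploration in the early
    # and late stages, probabilistic B-SLP-ILP selection in the middle.
    # The 0.2/0.8 boundaries and the probabilities are assumptions.
    progress = episode / total_episodes
    if progress < 0.2 or progress > 0.8:
        return "boltzmann"          # explore (early/late stage)
    r = random.random()
    if r < p_slp:
        return "slp"                # self-learning from own best experience
    if r < p_slp + p_ilp:
        return "ilp"                # interactive learning from the best peer task
    return "boltzmann"

Here "slp" and "ilp" stand in for the paper's Self-Learning and Interactive Learning Processes; a full implementation would update each parallel task's Q-table according to the selected process.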
Pages: 8567-8575
Number of pages: 9