Path Planning for Mobile Robots Using Transfer Reinforcement Learning

Cited by: 2
Authors
Zheng, Xinwang [1 ]
Zheng, Wenjie [2 ]
Du, Yong [2 ]
Li, Tiejun [2 ]
Yuan, Zhansheng [2 ]
Affiliations
[1] Jimei Univ, Chengyi Coll, Xiamen 361021, Fujian, Peoples R China
[2] Jimei Univ, Sch Ocean Informat Engn, Xiamen 361021, Fujian, Peoples R China
Keywords
Deep reinforcement transfer learning; heterogeneous environment; path planning; robot navigation; NAVIGATION; STRATEGIES;
DOI
10.1142/S0218213024400050
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Path planning enables a mobile robot to perceive its environment through sensor information and to plan a route to a target. As tasks grow more difficult, the environments that mobile robots face become increasingly complex, and traditional path planning methods can no longer meet the requirements of mobile robot navigation in such environments. Deep reinforcement learning (DRL) has therefore been introduced into robot navigation. However, training a DRL model can be time-consuming when the environment is very complex, and the known environment may differ from the unknown one. To handle robot navigation in heterogeneous environments, this paper applies deep transfer reinforcement learning (DTRL) to mobile robot path planning. Unlike DRL, DTRL does not require the distribution of the known environment to match that of the unknown environment. Additionally, DTRL can transfer the knowledge of an existing model to a new scenario to reduce training time. Simulations show that DTRL achieves a higher success rate than DRL for robot navigation in heterogeneous environments. By using a local policy, DTRL takes less time to train than DRL in a complex environment, and it also requires less navigation time.
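The transfer step described above can be illustrated with a toy tabular Q-learning sketch. This is not the authors' DTRL method (which uses deep networks and a local policy); it only demonstrates the underlying idea: train on a known environment, then reuse the learned value function as a warm start in a heterogeneous target environment. The gridworld, obstacle layouts, and hyperparameters are all illustrative assumptions.

```python
import random

# Toy 5x5 gridworld: the agent starts at (0, 0) and must reach (4, 4).
# Moving into a wall or an obstacle leaves the agent in place.
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
SIZE, GOAL = 5, (4, 4)

def train(obstacles, episodes, q=None, alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-learning; pass a source Q-table via `q` to warm-start (transfer)."""
    if q is None:
        q = {(r, c): [0.0] * 4 for r in range(SIZE) for c in range(SIZE)}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):
            # Epsilon-greedy action selection.
            a = random.randrange(4) if random.random() < eps else q[s].index(max(q[s]))
            nr, nc = s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1]
            ns = (nr, nc) if 0 <= nr < SIZE and 0 <= nc < SIZE and (nr, nc) not in obstacles else s
            reward = 1.0 if ns == GOAL else -0.01  # small step cost favors short paths
            q[s][a] += alpha * (reward + gamma * max(q[ns]) - q[s][a])
            s = ns
            if s == GOAL:
                break
    return q

def greedy_path(q, obstacles, limit=50):
    """Roll out the greedy policy from a learned Q-table."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(limit):
        a = q[s].index(max(q[s]))
        nr, nc = s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1]
        s = (nr, nc) if 0 <= nr < SIZE and 0 <= nc < SIZE and (nr, nc) not in obstacles else s
        path.append(s)
        if s == GOAL:
            break
    return path

random.seed(0)
SOURCE_OBS = {(1, 1), (2, 2)}  # the "known" environment
TARGET_OBS = {(1, 3), (3, 1)}  # a heterogeneous "unknown" environment
source_q = train(SOURCE_OBS, episodes=500)
# Transfer: copy the source Q-table and fine-tune briefly in the target task,
# instead of learning the target task from scratch.
target_q = train(TARGET_OBS, episodes=300, q={s: v[:] for s, v in source_q.items()})
```

The warm-started agent begins with value estimates that are already roughly shaped toward the goal, so it typically needs fewer target-environment episodes than an agent initialized from zeros, which is the training-time saving the abstract refers to.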
Pages: 15
References
48 in total
[1]  
[Anonymous], 2005, P 18 INT C NEURAL IN
[2]  
Apt K.R., 2022, EDSGER WYBE DIJKSTRA, P287, DOI 10.1145/3544585.3544600
[3]   Path Planning of Autonomous Mobile Robot in Comprehensive Unknown Environment Using Deep Reinforcement Learning [J].
Bai, Zekun ;
Pang, Hui ;
He, Zhaonian ;
Zhao, Bin ;
Wang, Tong .
IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (12) :22153-22166
[4]   Design control and management of intelligent and autonomous nanorobots with artificial intelligence for Prevention and monitoring of blood related diseases [J].
Balusamy, Balamurugan ;
Dhanaraj, Rajesh Kumar ;
Seetharaman, Tamizharasi ;
Sharma, Vandana ;
Shankar, Achyut ;
Viriyasitavat, Wattana .
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 131
[5]  
Buniyamin N, 2011, Int. J. Syst. Appl. Eng. Dev., V5, P151
[6]  
Chakraborty Jayasree, 2010, 2010 5th International Conference on Industrial and Information Systems (ICIIS 2010), P626, DOI 10.1109/ICIINFS.2010.5578632
[7]   Hybrid MDP based integrated hierarchical Q-learning [J].
Chen ChunLin ;
Dong DaoYi ;
Li Han-Xiong ;
Tarn, Tzyh-Jong .
SCIENCE CHINA-INFORMATION SCIENCES, 2011, 54 (11) :2279-2294
[8]  
Chen TY, 2018, C IND ELECT APPL, P1510, DOI 10.1109/ICIEA.2018.8397948
[9]   A Survey on Deep Transfer Learning [J].
Tan, Chuanqi ;
Sun, Fuchun ;
Kong, Tao ;
Zhang, Wenchang ;
Yang, Chao ;
Liu, Chunfang .
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2018, PT III, 2018, 11141 :270-279
[10]   A novel whale optimization algorithm of path planning strategy for mobile robots [J].
Dai, Yaonan ;
Yu, Jiuyang ;
Zhang, Cong ;
Zhan, Bowen ;
Zheng, Xiaotao .
APPLIED INTELLIGENCE, 2023, 53 (09) :10843-10857