A novel mobile robot navigation method based on deep reinforcement learning

Cited by: 36
Authors
Quan, Hao [1 ,2 ]
Li, Yansheng [1 ,2 ]
Zhang, Yi [1 ,2 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Res Ctr Intelligent Syst & Robot, Chongqing 400065, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Sch Adv Mfg Engn, Chongqing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Deep reinforcement learning; robot exploration; recurrent neural network; DDQN
DOI
10.1177/1729881420921672
CLC Classification Number
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
Mobile robots are now used in an increasingly wide range of applications, and their movement depends on effective navigation, especially path exploration. To address this navigation problem, this article proposes a method based on deep reinforcement learning and recurrent neural networks, which combines a double-network (DDQN) module and a recurrent neural network module with the ideas of reinforcement learning. The article also designs corresponding parameter functions to improve the performance of the model. To test the effectiveness of the method, the model is trained on a grid map representation in a two-dimensional simulation environment, a three-dimensional TurtleBot simulation environment, and a physical robot environment, and the resulting data are used for comparative analysis. The experimental results show that the proposed algorithm improves both path-finding efficiency and path length.
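As a rough illustration only, not the authors' implementation, the sketch below shows how the two ingredients named in the abstract, a double-network (DDQN) value estimator and a recurrent module, might be wired together in PyTorch. The names and dimensions (RecurrentQNet, N_ACTIONS, OBS_DIM, hidden size) are assumptions made for this example.

import torch
import torch.nn as nn

N_ACTIONS = 4          # assumed: discrete grid moves (up, down, left, right)
OBS_DIM = 64           # assumed: flattened local grid-map observation
GAMMA = 0.99           # discount factor

class RecurrentQNet(nn.Module):
    # Q-network with an LSTM so the agent can accumulate information
    # from past observations while exploring a partially known map.
    def __init__(self, obs_dim=OBS_DIM, hidden=128, n_actions=N_ACTIONS):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, hidden_state=None):
        # obs_seq has shape (batch, time, obs_dim)
        z = self.encoder(obs_seq)
        out, hidden_state = self.lstm(z, hidden_state)
        return self.head(out), hidden_state

def ddqn_target(online_net, target_net, next_obs_seq, rewards, dones):
    # Double-DQN target: the online network selects the greedy action,
    # while the target network evaluates it (last time step only).
    with torch.no_grad():
        q_online, _ = online_net(next_obs_seq)
        best_action = q_online[:, -1, :].argmax(dim=1, keepdim=True)
        q_target, _ = target_net(next_obs_seq)
        next_q = q_target[:, -1, :].gather(1, best_action).squeeze(1)
    return rewards + GAMMA * (1.0 - dones) * next_q

In training, the online network would be regressed toward this target (for example with a mean-squared error loss) while the target network is refreshed periodically, which is the double-network idea referred to in the abstract.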
Pages: 11