Scene perception based visual navigation of mobile robot in indoor environment

Cited by: 45
Authors
Ran, T. [1 ]
Yuan, L. [1 ,2 ]
Zhang, J. B. [1 ]
Affiliations
[1] Xinjiang Univ, Sch Mech Engn, Urumqi 830047, Peoples R China
[2] Beijing Univ Chem Technol, Beijing Adv Innovat Ctr Soft Matter Sci & Engn, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Indoor mobile robot; Visual navigation; Convolution neural networks; Scene perception; Obstacle avoidance;
DOI
10.1016/j.isatra.2020.10.023
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Vision-only navigation is the key to reducing the cost and widening the application of indoor mobile robots. Considering the unpredictable nature of artificial environments, deep learning techniques, with their strong ability to abstract image features, can be used to perform navigation. In this paper, we proposed a low-cost, vision-only perception approach to realize indoor mobile robot navigation, converting the problem of visual navigation into scene classification. Existing related research based on deep scene classification networks has lower accuracy and brings a heavier computational burden, and the navigation system was not fully assessed in previous work. Therefore, we designed a shallow convolutional neural network (CNN) with higher scene classification accuracy and efficiency to process images captured by a monocular camera. In addition, we proposed an adaptive weighted control (AWC) algorithm and combined it with regular control (RC) to improve the robot's motion performance. We demonstrated the capability and robustness of the proposed navigation method by performing extensive experiments in both static and dynamic unknown environments. The qualitative and quantitative results showed that the system performs better than previous related work in unknown environments. (C) 2020 ISA. Published by Elsevier Ltd. All rights reserved.
Pages: 389-400
Number of pages: 12
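Note: the following is an illustrative, hypothetical sketch (in Python/PyTorch) of the scene-classification idea summarized in the abstract: a shallow CNN classifies a monocular camera frame into a scene category, and each category is mapped to a motion command. The layer sizes, the SCENE_CLASSES labels, the command_from_frame helper, and the velocity table are assumptions made for illustration only; they do not reproduce the paper's actual network or its AWC/RC controllers.

# Hypothetical sketch: shallow CNN scene classifier driving a simple
# class-to-command mapping. All sizes, class names, and velocities below
# are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

# Assumed scene classes; the paper's exact class set is not given here.
SCENE_CLASSES = ["corridor_straight", "turn_left", "turn_right", "obstacle_ahead"]

class ShallowSceneCNN(nn.Module):
    """A small convolutional classifier for RGB frames (assumed 128x128 input)."""
    def __init__(self, num_classes: int = len(SCENE_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2),  # 128 -> 64
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                       # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                       # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),                               # -> 1x1
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

def command_from_frame(model: nn.Module, frame: torch.Tensor):
    """Map one normalized frame of shape (1, 3, 128, 128) to a (linear, angular)
    velocity pair. The velocity table is a placeholder standing in for the
    paper's regular/adaptive weighted control; it only shows the
    class-to-command idea."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(frame), dim=1)
    label = SCENE_CLASSES[int(probs.argmax(dim=1))]
    velocity_table = {
        "corridor_straight": (0.3, 0.0),
        "turn_left": (0.1, 0.5),
        "turn_right": (0.1, -0.5),
        "obstacle_ahead": (0.0, 0.0),
    }
    return velocity_table[label]

if __name__ == "__main__":
    model = ShallowSceneCNN()
    dummy_frame = torch.rand(1, 3, 128, 128)  # stand-in for a camera image
    print(command_from_frame(model, dummy_frame))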