Indoor Localization and Navigation based on Deep Learning using a Monocular Visual System

Cited by: 0
Authors
Ancona R.E.A. [1]
Ramírez L.G.C. [1]
Frías O.O.G. [1]
Affiliations
[1] UPIITA-IPN SEPI Section, Instituto Politécnico Nacional, Mexico City
Source
International Journal of Advanced Computer Science and Applications | 2021 / Vol. 12 / No. 06
Keywords
autonomous navigation; feature extractor; object detection; visual localization; visual navigation
DOI
10.14569/IJACSA.2021.0120611
Abstract
Nowadays, computer vision systems analyze acquired data to perform crucial tasks such as localization and navigation. For successful navigation, a robot must interpret the acquired data and determine its position in order to decide how to move through the environment. This paper proposes a visual localization and navigation approach for autonomous indoor mobile robots. A convolutional neural network and background modeling are used to localize the system in the environment. Object detection is based on copy-move detection, an image-forensics technique that extracts features from the image to identify similar regions. An adaptive threshold is proposed to cope with illumination changes. Each detected object is classified so that a control deep neural network can avoid it. A U-Net model is implemented to track the path trajectory. The experimental results were obtained from real data and demonstrate the effectiveness of the proposed algorithm; the adaptive threshold solves illumination-variation issues in object detection. © 2021
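The adaptive threshold mentioned in the abstract can be illustrated with a minimal local-mean sketch. The paper's exact formulation is not reproduced here; the `block` window size, `offset`, and the mean-minus-offset decision rule are assumptions for illustration only. The idea is that the binarization level adapts to local brightness rather than using one global value, which is what makes it robust to illumination changes:

```python
import numpy as np

def adaptive_threshold(gray, block=15, offset=10):
    """Binarize a grayscale image against its local mean.

    A pixel is set to 255 if it exceeds (local mean - offset), so the
    decision level follows local illumination instead of a global value.
    `block` and `offset` are illustrative parameters, not the paper's.
    """
    h, w = gray.shape
    pad = block // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    # Integral image (with a zero border) for O(1) local-sum queries.
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            # Sum of the block x block window centered on (y, x).
            y1, x1 = y + block, x + block
            local_sum = ii[y1, x1] - ii[y, x1] - ii[y1, x] + ii[y, x]
            mean = local_sum / (block * block)
            out[y, x] = 255 if gray[y, x] > mean - offset else 0
    return out
```

In practice a library routine such as OpenCV's `cv2.adaptiveThreshold` performs the same local-mean comparison efficiently; the explicit loop above only makes the mechanism visible.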
Pages: 79-86 (7 pages)