An Efficient and Scalable Collection of Fly-Inspired Voting Units for Visual Place Recognition in Changing Environments

Cited by: 11
Authors
Arcanjo, Bruno [1 ]
Ferrarini, Bruno [1 ]
Milford, Michael [2 ]
McDonald-Maier, Klaus D. [1 ]
Ehsan, Shoaib [1 ]
Affiliations
[1] Univ Essex, Sch Comp Sci & Elect Engn, Colchester CO4 3SQ, Essex, England
[2] Queensland Univ Technol, Sch Elect Engn & Comp Sci, Brisbane, Qld 4000, Australia
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Vision-based navigation; localization; bioinspired robot learning; SIMULTANEOUS LOCALIZATION; IMAGE FEATURES; SCALE;
DOI
10.1109/LRA.2022.3140827
CLC Number
TP24 [Robotics];
Subject Classification Codes
080202 ; 1405 ;
Abstract
State-of-the-art visual place recognition performance is currently achieved by deep-learning-based approaches. Despite recent efforts in designing lightweight convolutional neural network models, these can still be too expensive for the most hardware-restricted robot applications. Low-overhead visual place recognition techniques would not only enable platforms equipped with low-end, cheap hardware but also reduce computation on more powerful systems, allowing these resources to be allocated to other navigation tasks. In this work, our goal is to provide an algorithm of extreme compactness and efficiency while achieving state-of-the-art robustness to appearance changes and small point-of-view variations. Our first contribution is DrosoNet, an exceptionally compact model inspired by the odor processing abilities of the fruit fly, Drosophila melanogaster. Our second and main contribution is a voting mechanism that leverages multiple small and efficient classifiers to achieve more robust and consistent visual place recognition than a single classifier alone. We use DrosoNet as the baseline classifier for the voting mechanism and evaluate our models on five benchmark datasets, assessing moderate to extreme appearance changes and small to moderate viewpoint variations. We then compare the proposed algorithms to state-of-the-art methods, both in terms of area under the precision-recall curve and computational efficiency.
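The core idea of the abstract's main contribution can be illustrated with a minimal sketch: several independent, lightweight classifiers each predict a place ID for a query image, and the ensemble returns the ID receiving the most votes. This is only an illustrative toy, not the paper's actual implementation; the stand-in lambda classifiers below are hypothetical placeholders for trained DrosoNet instances, and `vote` assumes each classifier maps an image descriptor to an integer place index.

```python
from collections import Counter

def vote(classifiers, image_descriptor):
    """Query every compact classifier and return the place ID that
    receives the most votes (ties broken by first-seen order)."""
    predictions = [clf(image_descriptor) for clf in classifiers]
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical stand-in classifiers: each maps a descriptor to a place ID.
# In the paper these would be independently initialized DrosoNet units.
clf_a = lambda d: 3   # this unit believes the query shows place 3
clf_b = lambda d: 3
clf_c = lambda d: 7   # a dissenting unit
print(vote([clf_a, clf_b, clf_c], descriptor := None))  # → 3
```

The design rationale, as described in the abstract, is that individually weak but cheap units become more robust and consistent in aggregate than any single classifier, while the total cost stays far below that of a deep network.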
Pages: 2527-2534
Page count: 8