SLGD-Loop: A Semantic Local and Global Descriptor-Based Loop Closure Detection for Long-Term Autonomy

Cited by: 1
Authors
Arshad, Saba [1 ]
Kim, Gon-Woo [2 ]
Affiliations
[1] Chungbuk Natl Univ, Dept Control & Robot Engn, Coll Elect & Comp Engn, Cheongju 28644, South Korea
[2] Chungbuk Natl Univ, Dept Intelligent Syst & Robot, Coll Elect & Comp Engn, Cheongju 28644, South Korea
Keywords
Autonomous navigation; loop closure detection; semantics; simultaneous localization and mapping; PLACE RECOGNITION; FAB-MAP; LOCALIZATION; SLAM;
DOI
10.1109/TITS.2024.3452158
CLC Number
TU [Building Science];
Subject Classification Code
0813
Abstract
In simultaneous localization and mapping (SLAM), detecting a true loop closure benefits relocalization and improves map accuracy. However, loop closure performance is strongly affected by variations in lighting, viewpoint, and season, and by the presence of dynamic objects. Over the past few decades, considerable effort has been devoted to these challenges, yet loop closure detection remains an open problem. Motivated by the ability of visual semantics to provide human-like scene understanding, this research investigates semantics-aided visual loop closure detection and presents a novel coarse-to-fine loop closure detection method using semantic local and global descriptors (SLGD) for visual SLAM systems. The proposed method exploits both low-level and high-level information in a given image, combining the benefits of local visual features, which are invariant to viewpoint and illumination changes, with global semantics extracted from specific semantic regions. Robustness for long-term autonomy is achieved by fusing global semantic similarity with semantically salient local feature similarity. The proposed SLGD-Loop outperforms state-of-the-art loop closure detection methods on a range of challenging benchmark datasets, with significantly improved Recall@N and a higher recall rate at 100% precision.
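The coarse-to-fine scheme sketched in the abstract (retrieve candidates by global semantic similarity, then verify and re-score with local feature similarity) can be illustrated as follows. This is a minimal sketch under assumed interfaces, not the authors' implementation: the cosine-similarity retrieval, the function names, and the weighted-sum fusion with weight `alpha` are all assumptions for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two descriptor vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_candidates(query_global, db_globals, top_n=3):
    # Coarse stage: rank database frames by global semantic similarity
    # and keep the top-N candidates for fine verification.
    scores = [cosine_sim(query_global, g) for g in db_globals]
    order = np.argsort(scores)[::-1][:top_n]
    return [(int(i), scores[int(i)]) for i in order]

def fused_score(global_sim, local_sim, alpha=0.5):
    # Fine stage: fuse global semantic similarity with a local feature
    # similarity score. The weight alpha is illustrative, not from the paper.
    return alpha * global_sim + (1.0 - alpha) * local_sim
```

In this sketch a loop closure would be declared when the fused score of the best candidate exceeds a threshold; the actual SLGD-Loop scoring and verification details are given in the paper itself.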
Pages: 19714-19728
Page count: 15