SG-LPR: Semantic-Guided LiDAR-Based Place Recognition

Cited by: 1
Authors
Jiang, Weizhong [1 ]
Xue, Hanzhang [1 ,2 ]
Si, Shubin [1 ,3 ]
Min, Chen [4 ]
Xiao, Liang [1 ]
Nie, Yiming [1 ]
Dai, Bin [1 ]
Affiliations
[1] Def Innovat Inst, Unmanned Syst Technol Res Ctr, Beijing 100071, Peoples R China
[2] Natl Univ Def Technol, Test Ctr, Xian 710106, Peoples R China
[3] Harbin Engn Univ, Coll Intelligent Syst Sci & Engn, Harbin 150001, Peoples R China
[4] Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
Keywords
LiDAR-based place recognition; semantic-guided; auxiliary task; Swin Transformer; U-Net; scan context
DOI
10.3390/electronics13224532
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Place recognition plays a crucial role in tasks such as loop closure detection and re-localization in robotic navigation. As a high-level representation of a scene, semantics enables models to distinguish geometrically similar places, thereby enhancing robustness to environmental changes. Unlike most existing semantic-based LiDAR place recognition (LPR) methods, which rely on a multi-stage and relatively segregated data-processing and storage pipeline, we propose SG-LPR, a novel end-to-end LPR model guided by semantic information. The model introduces a semantic segmentation auxiliary task that guides it to autonomously capture high-level semantic information from the scene and implicitly integrate these features into the main LPR task, yielding a unified "segmentation-while-describing" framework that avoids additional intermediate data-processing and storage steps. Moreover, the semantic segmentation auxiliary task operates only during model training and therefore adds no time overhead at test time. The model also combines the advantages of the Swin Transformer and U-Net to address the shortcomings of current semantic-based LPR methods in capturing global contextual information and extracting fine-grained features. Extensive experiments on multiple sequences from the KITTI and NCLT datasets validate the effectiveness, robustness, and generalization ability of the proposed method, which achieves notable performance improvements over state-of-the-art methods.
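To make the "segmentation-while-describing" idea above concrete, the minimal sketch below (not the authors' implementation; the class name SGLPRSketch, the channel sizes, and the simple convolutional stand-in for the Swin-Transformer/U-Net hybrid backbone are all hypothetical assumptions) shows a shared encoder feeding a global place-descriptor head used at both training and test time, plus a semantic segmentation auxiliary head that is active only while training, so it adds no inference cost.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SGLPRSketch(nn.Module):
    """Toy multi-task model: global place descriptor + training-only segmentation head."""

    def __init__(self, in_ch: int = 5, desc_dim: int = 256, num_classes: int = 20):
        super().__init__()
        # Hypothetical stand-in for the paper's Swin-Transformer/U-Net hybrid backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Main LPR head: pool shared features into an L2-normalized global descriptor.
        self.descriptor_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, desc_dim),
        )
        # Auxiliary head: per-cell semantic logits, used only during training.
        self.seg_head = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor):
        feats = self.encoder(x)                          # shared, semantics-aware features
        desc = F.normalize(self.descriptor_head(feats), dim=1)
        if self.training:                                # auxiliary branch: training only
            return desc, self.seg_head(feats)
        return desc                                      # test time: descriptor only


model = SGLPRSketch()
scan = torch.randn(2, 5, 64, 64)                         # e.g., a range-image-like LiDAR projection
desc, seg_logits = model(scan)                           # training mode: both outputs
model.eval()
desc_only = model(scan)                                  # inference: no segmentation overhead
```

During training, a place-recognition loss (e.g., a triplet or contrastive loss) on the descriptor would be combined with a weighted cross-entropy loss on the auxiliary segmentation logits; at test time only the descriptor branch runs, matching the abstract's claim that the auxiliary task adds no overhead during the testing phase.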
Pages: 21
Related Papers (50 in total)
  • [1] LiDAR-Based Semantic Place Recognition in Dynamic Urban Environments
    Wei, Kun
    Ni, Peizhou
    Li, Xu
    Hu, Yue
    Hu, Weiming
    IEEE SENSORS JOURNAL, 2024, 24 (17) : 28397 - 28408
  • [2] Context for LiDAR-based Place Recognition
    Li, Jiahao
    Qian, Hui
    Du, Xin
    2023 21ST INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS, ICAR, 2023, : 107 - 112
  • [3] LiDAR-Based Place Recognition For Autonomous Driving: A Survey
    Zhang, Yongjun
    Shi, Pengcheng
    Li, Jiayuan
    ACM COMPUTING SURVEYS, 2025, 57 (04)
  • [4] Scene Overlap Prediction for LiDAR-Based Place Recognition
    Zhang, Yingjian
    Dai, Chenguang
    Zhou, Ruqin
    Zhang, Zhenchao
    Ji, Hongliang
    Fan, Huixin
    Zhang, Yongsheng
    Wang, Hanyun
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2023, 20 : 1 - 5
  • [5] Stabilize an Unsupervised Feature Learning for LiDAR-based Place Recognition
    Yin, Peng
    Xu, Lingyun
    Liu, Zhe
    Li, Lu
    Salman, Hadi
    He, Yuqing
    Xu, Weiliang
    Wang, Hesheng
    Choset, Howie
    2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 1162 - 1167
  • [6] SG-NeRF: Semantic-guided Point-based Neural Radiance Fields
    Qu, Yansong
    Wang, Yuze
    Qi, Yue
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 570 - 575
  • [7] SC-LPR: Spatiotemporal context based LiDAR place recognition
    Dai, Deyun
    Wang, Jikai
    Chen, Zonghai
    Bao, Peng
    PATTERN RECOGNITION LETTERS, 2022, 156 : 160 - 166
  • [8] Improved semantic-guided network for skeleton-based action recognition
    Mansouri, Amine
    Bakir, Toufik
    Elzaar, Abdellah
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 104
  • [9] Semantic-guided de-attention with sharpened triplet marginal loss for visual place recognition
    Choi, Seung-Min
    Lee, Seung-Ik
    Lee, Jae-Yeong
    Kweon, In So
    PATTERN RECOGNITION, 2023, 141