PatchLPR: a multi-level feature fusion transformer network for LiDAR-based place recognition

Times Cited: 0
Authors
Sun, Yang [1 ,2 ]
Guo, Jianhua [1 ,3 ]
Wang, Haiyang [4 ]
Zhang, Yuhang [1 ,3 ]
Zheng, Jiushuai [1 ,3 ]
Tian, Bin [5 ]
Affiliations
[1] Hebei Univ Engn, Coll Mech & Equipment Engn, Handan 056038, Peoples R China
[2] Key Lab Intelligent Ind Equipment Technol Hebei Pr, Handan, Hebei, Peoples R China
[3] Handan Key Lab Intelligent Vehicles, Handan, Hebei, Peoples R China
[4] Jizhong Energy Fengfeng Grp Co Ltd, 16 Unicom South Rd, Handan, Hebei, Peoples R China
[5] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
Keywords
SLAM; LiDAR place recognition; Deep learning; Patch; Vision; Deep
DOI
10.1007/s11760-024-03138-9
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronic technology; communication technology]
Discipline Codes
0808; 0809
Abstract
LiDAR-based place recognition plays a crucial role in autonomous vehicles, enabling previously visited locations to be re-identified in GPS-denied environments. Localization is then achieved by searching for the nearest neighbors of a query descriptor in a database. Two common types of place recognition features are local descriptors and global descriptors: local descriptors compactly represent regions or points, while global descriptors summarize the scan as a whole. Although both types have advanced considerably in recent years, any single representation inevitably loses information. To overcome this limitation, we developed PatchLPR, a Transformer network that employs multi-level feature fusion for robust place recognition. PatchLPR integrates global and local feature information, focusing on meaningful regions of the feature map to generate an environmental representation. We propose a patch feature extraction module based on the Vision Transformer to fully exploit the information in, and the correlations among, features at different levels. We evaluated our approach on the KITTI dataset and a self-collected dataset covering over 4.2 km. The experimental results demonstrate that our method effectively utilizes multi-level features to enhance place recognition performance.
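To make the two mechanisms in the abstract concrete, the PyTorch sketch below shows how patch tokens drawn from several feature-map levels can be fused by a Transformer encoder into one global descriptor, and how localization then reduces to a nearest-neighbor search over a descriptor database. This is a minimal illustration under our own assumptions: the class name PatchFusionSketch, the token dimension, the 8x8 pooling grid, and every layer choice are hypothetical stand-ins, not the authors' implementation.

# Minimal sketch (assumptions throughout): multi-level patch-token fusion
# with a Transformer encoder, plus nearest-neighbor place retrieval.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchFusionSketch(nn.Module):
    """Hypothetical fusion head; NOT the paper's actual PatchLPR code."""

    def __init__(self, level_channels=(64, 128, 256), dim=256, depth=2, heads=4):
        super().__init__()
        # 1x1 convolutions project every level to a shared token width.
        self.proj = nn.ModuleList(
            [nn.Conv2d(c, dim, kernel_size=1) for c in level_channels])
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, feature_maps):
        tokens = []
        for feat, proj in zip(feature_maps, self.proj):
            # Pool each level to a common 8x8 grid; every cell becomes one
            # "patch" token carrying local information from that level.
            grid = F.adaptive_avg_pool2d(proj(feat), (8, 8))
            tokens.append(grid.flatten(2).transpose(1, 2))  # (B, 64, dim)
        x = torch.cat(tokens, dim=1)   # patch tokens from all levels
        x = self.encoder(x)            # self-attention mixes levels/regions
        g = x.mean(dim=1)              # pooled global descriptor
        return F.normalize(g, dim=-1)  # unit norm -> cosine similarity

if __name__ == "__main__":
    model = PatchFusionSketch().eval()
    # Fake multi-level feature maps for 5 database scans plus 1 query,
    # standing in for a backbone run on LiDAR range images.
    feats = [torch.randn(6, c, s, s)
             for c, s in ((64, 32), (128, 16), (256, 8))]
    with torch.no_grad():
        desc = model(feats)                      # (6, 256) descriptors
    database, query = desc[:5], desc[5:]
    scores = query @ database.T                  # cosine similarities
    print("nearest place index:", scores.argmax(dim=1).item())

Because the descriptors are L2-normalized, the final dot product is a cosine similarity, so large-scale retrieval can be handed off to an approximate nearest-neighbor library rather than the brute-force search shown here.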
Pages: 157-165
Number of pages: 9
Related Papers
50 records in total
  • [21] Guo, Zebin; Shuai, Hui; Liu, Guangcan; Zhu, Yisheng; Wang, Wenqing. Multi-level feature fusion pyramid network for object detection. VISUAL COMPUTER, 2023, 39(9): 4267-4277
  • [22] Huang, Zhiji; Yu, Songsen; Liang, Jun. Multi-level feature fusion capsule network with self-attention for facial expression recognition. JOURNAL OF ELECTRONIC IMAGING, 2023, 32(2)
  • [23] Sun, Chunli; Zhao, Feng. Multi-level feature fusion network for neuronal morphology classification. FRONTIERS IN NEUROSCIENCE, 2024, 18
  • [24] Fan, Yingying; Qian, Yurong; Gong, Weijun; Chu, Zhuang; Qin, Yugang; Muhetaer, Palidan. Multi-level interactive fusion network based on adversarial learning for fusion classification of hyperspectral and LiDAR data. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 257
  • [25] Xu, Yiwu; Chen, Yun. Attention-based interactive multi-level feature fusion for named entity recognition. SCIENTIFIC REPORTS, 2025, 15(1)
  • [26] Li, Lu; Du, Lan; He, Haonan; Li, Chen; Deng, Shen. Multi-level feature fusion SAR automatic target recognition based on deep forest. JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2021, 43(3): 606-614
  • [27] Lai, Xin; Chen, Yukang; Lu, Fanbin; Liu, Jianhui; Jia, Jiaya. Spherical Transformer for LiDAR-based 3D recognition. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 17545-17555
  • [28] Ng, Wing W. Y.; Zeng, Weijie; Wang, Ting. Multi-level local feature coding fusion for music genre recognition. IEEE ACCESS, 2020, 8: 152713-152727
  • [29] Joo, Hyeong-Jun; Kim, Jaeho. IS-CAT: Intensity-spatial cross-attention transformer for LiDAR-based place recognition. SENSORS, 2024, 24(2)
  • [30] Jian, Yongsheng; Zhu, Daming; Fu, Zhitao; Wen, Shiya. Remote sensing image segmentation network based on multi-level feature refinement and fusion. LASER & OPTOELECTRONICS PROGRESS, 2023, 60(4)