PatchLPR: a multi-level feature fusion transformer network for LiDAR-based place recognition

Cited: 0
Authors
Sun, Yang [1 ,2 ]
Guo, Jianhua [1 ,3 ]
Wang, Haiyang [4 ]
Zhang, Yuhang [1 ,3 ]
Zheng, Jiushuai [1 ,3 ]
Tian, Bin [5 ]
Affiliations
[1] Hebei Univ Engn, Coll Mech & Equipment Engn, Handan 056038, Peoples R China
[2] Key Lab Intelligent Ind Equipment Technol Hebei Pr, Handan, Hebei, Peoples R China
[3] Handan Key Lab Intelligent Vehicles, Handan, Hebei, Peoples R China
[4] Jizhong Energy Fengfeng Grp Co Ltd, 16 Unicom South Rd, Handan, Hebei, Peoples R China
[5] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
Keywords
SLAM; LiDAR place recognition; Deep learning; Patch; Vision; Deep
DOI
10.1007/s11760-024-03138-9
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
LiDAR-based place recognition plays a crucial role in autonomous vehicles, enabling the identification of previously visited locations in GPS-denied environments. Localization can then be achieved by searching for the nearest neighbors of a query descriptor in a database. Two common types of place recognition features are local descriptors and global descriptors: local descriptors compactly represent regions or points, while global descriptors provide an overarching view of the data. Despite the significant progress made by both types of descriptors in recent years, any single representation inevitably involves information loss. To overcome this limitation, we developed PatchLPR, a Transformer network employing multi-level feature fusion for robust place recognition. PatchLPR integrates global and local feature information and focuses on meaningful regions of the feature map to generate an environmental representation. We propose a patch feature extraction module based on the Vision Transformer to fully exploit the information in, and the correlations among, different features. We evaluated our approach on the KITTI dataset and a self-collected dataset covering over 4.2 km. The experimental results demonstrate that our method effectively utilizes multi-level features to enhance place recognition performance.
Pages: 157-165
Number of pages: 9
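The abstract outlines a retrieval pipeline: patch-level (local) features are extracted with a Vision-Transformer-style encoder, fused with a global view of the scan, and the resulting descriptor is matched against a database by nearest-neighbor search. Below is a minimal, hypothetical PyTorch sketch of that idea; the module structure, dimensions, and fusion rule (concatenate-then-project) are illustrative assumptions and not the authors' implementation.

# Toy sketch (assumed design): transformer over patch tokens + global pooling,
# fused into one unit-norm place descriptor, then nearest-neighbor retrieval.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchFusionDescriptor(nn.Module):
    """Illustrative multi-level descriptor, not the PatchLPR architecture."""

    def __init__(self, patch_dim=256, num_heads=4, num_layers=2):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=patch_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Fuse local (attended patch) and global (pooled input) features.
        self.fuse = nn.Linear(2 * patch_dim, patch_dim)

    def forward(self, patch_tokens):            # (B, N_patches, patch_dim)
        local = self.encoder(patch_tokens)      # patch-level (local) features
        local_summary = local.mean(dim=1)       # aggregate attended patches
        global_feat = patch_tokens.mean(dim=1)  # simple global view of the scan
        fused = self.fuse(torch.cat([local_summary, global_feat], dim=-1))
        return F.normalize(fused, dim=-1)       # unit-norm place descriptor

def nearest_place(query_desc, database_desc):
    """Return the database index with the highest cosine similarity."""
    sims = database_desc @ query_desc           # (M,) similarities
    return int(torch.argmax(sims))

if __name__ == "__main__":
    model = PatchFusionDescriptor()
    db_scans = torch.randn(100, 64, 256)        # 100 stored scans, 64 patch tokens each
    with torch.no_grad():
        database = model(db_scans)              # (100, 256) place descriptors
        query = model(torch.randn(1, 64, 256))[0]
    print("matched place index:", nearest_place(query, database))

In this sketch, localization is the nearest-neighbor lookup the abstract describes: the query scan's fused descriptor is compared against all stored descriptors and the best match identifies the revisited place.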