SALSA: Swift Adaptive Lightweight Self-Attention for Enhanced LiDAR Place Recognition

Cited by: 1
Authors
Goswami, Raktim Gautam [1 ]
Patel, Naman [1 ]
Krishnamurthy, Prashanth [1 ]
Khorrami, Farshad [1 ]
Affiliations
[1] NYU Tandon Sch Engn, Dept Elect & Comp Engn, Control Robot Res Lab CRRL, Brooklyn, NY 11201 USA
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2024 / Vol. 9 / No. 10
Keywords
Point cloud compression; Laser radar; Feature extraction; Location awareness; Mixers; Transformers; Deep learning; Deep learning for visual perception; deep learning methods; localization; representation learning
DOI
10.1109/LRA.2024.3440098
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline Classification Codes
080202; 1405
Abstract
Large-scale LiDAR mapping and localization systems leverage place recognition techniques to mitigate odometry drift and ensure accurate maps. These techniques use scene representations derived from LiDAR point clouds to identify previously visited sites in a database. Local descriptors, assigned to each point in a point cloud, are aggregated into a scene representation for that cloud; they are also used to re-rank retrieved point clouds by geometric fitness score. We propose SALSA, a novel, lightweight, and efficient framework for LiDAR place recognition. It consists of a SphereFormer backbone that uses radial window attention to aggregate information from sparse, distant points, an adaptive self-attention layer that pools local descriptors into tokens, and a multi-layer-perceptron (MLP) Mixer layer that aggregates the tokens into a scene descriptor. The proposed framework outperforms existing methods on several LiDAR place recognition datasets in both retrieval and metric localization while operating in real time.
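The aggregation pipeline sketched in the abstract (local descriptors → attention-pooled tokens → Mixer-style mixing → one scene descriptor) can be illustrated with a minimal NumPy mock-up. This is a hypothetical sketch, not the authors' implementation: the learned query tokens, the single residual token/channel mixing step, and all shapes and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(descriptors, queries):
    """Pool N local descriptors into k tokens via attention.

    Hypothetical stand-in for SALSA's adaptive self-attention pooling:
    each learned query attends over all local descriptors and produces
    one token as a convex combination of them."""
    d = descriptors.shape[-1]
    attn = softmax(queries @ descriptors.T / np.sqrt(d), axis=-1)  # (k, N)
    return attn @ descriptors                                      # (k, d)

def mixer_layer(tokens, w_tok, w_ch):
    """One simplified MLP-Mixer step: mix across tokens, then channels."""
    tokens = tokens + w_tok @ tokens   # token mixing:  (k, k) @ (k, d)
    tokens = tokens + tokens @ w_ch    # channel mixing: (k, d) @ (d, d)
    return tokens

N, d, k = 4096, 16, 8                          # points, descriptor dim, tokens
local_desc = rng.standard_normal((N, d))       # per-point local descriptors
queries = rng.standard_normal((k, d))          # learned in the real model
w_tok = 0.1 * rng.standard_normal((k, k))      # illustrative mixer weights
w_ch = 0.1 * rng.standard_normal((d, d))

tokens = attention_pool(local_desc, queries)   # (k, d)
tokens = mixer_layer(tokens, w_tok, w_ch)      # (k, d)
scene_descriptor = tokens.reshape(-1)          # flatten to a global descriptor
scene_descriptor /= np.linalg.norm(scene_descriptor)  # L2-normalize for retrieval
print(scene_descriptor.shape)                  # (128,)
```

Retrieval then reduces to nearest-neighbor search over these unit-norm scene descriptors, while the per-point local descriptors remain available for geometric re-ranking of the retrieved candidates.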
Pages: 8242-8249 (8 pages)
Related Papers (50 total)
  • [1] A lightweight transformer with linear self-attention for defect recognition
    Zhai, Yuwen
    Li, Xinyu
    Gao, Liang
    Gao, Yiping
    ELECTRONICS LETTERS, 2024, 60 (17)
  • [2] Lightweight Smoke Recognition Based on Deep Convolution and Self-Attention
    Zhao, Yang
    Wang, Yigang
    Jung, Hoi-Kyung
    Jin, Yongqiang
    Hua, Dan
    Xu, Sen
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2022, 2022
  • [3] Att-Net: Enhanced emotion recognition system using lightweight self-attention module
    Mustaqeem
    Kwon, Soonil
    APPLIED SOFT COMPUTING, 2021, 102
  • [4] Lightweight Vision Transformer with Spatial and Channel Enhanced Self-Attention
    Zheng, Jiahao
    Yang, Longqi
    Li, Yiying
    Yang, Ke
    Wang, Zhiyuan
    Zhou, Jun
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 1484 - 1488
  • [5] ESAformer: Enhanced Self-Attention for Automatic Speech Recognition
    Li, Junhua
    Duan, Zhikui
    Li, Shiren
    Yu, Xinmei
    Yang, Guangguang
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 471 - 475
  • [6] Region Adaptive Self-Attention for an Accurate Facial Emotion Recognition
    Lee, Seongmin
    Lee, Jeonghaeng
    Kim, Minsik
    Lee, Sanghoon
    PROCEEDINGS OF 2022 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2022, : 791 - 796
  • [7] Lightweight Self-Attention Network for Semantic Segmentation
    Zhou, Yan
    Zhou, Haibin
    Li, Nanjun
    Li, Jianxun
    Wang, Dongli
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [8] A Static Sign Language Recognition Method Enhanced with Self-Attention Mechanisms
    Wang, Yongxin
    Jiang, He
    Sun, Yutong
    Xu, Longqi
    SENSORS, 2024, 24 (21)
  • [9] Self-attention for Speech Emotion Recognition
    Tarantino, Lorenzo
    Garner, Philip N.
    Lazaridis, Alexandros
    INTERSPEECH 2019, 2019, : 2578 - 2582
  • [10] Lightweight Self-Attention Residual Network for Hyperspectral Classification
    Xia, Jinbiao
    Cui, Ying
    Li, Wenshan
    Wang, Liguo
    Wang, Chao
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19