Learning Sequence Descriptor Based on Spatio-Temporal Attention for Visual Place Recognition

Citations: 2
Authors
Zhao, Junqiao [1 ,2 ,3 ]
Zhang, Fenglin [1 ,2 ]
Cai, Yingfeng [1 ,2 ]
Tian, Gengxuan [1 ,2 ]
Mu, Wenjie [1 ,2 ]
Ye, Chen [1 ,2 ]
Feng, Tiantian [4 ]
Affiliations
[1] Tongji Univ, Sch Elect & Informat Engn, Dept Comp Sci & Technol, Shanghai 201804, Peoples R China
[2] Tongji Univ, MOE Key Lab Embedded Syst & Serv Comp, Shanghai 201804, Peoples R China
[3] Tongji Univ, Inst Intelligent Vehicles, Shanghai 201804, Peoples R China
[4] Tongji Univ, Sch Surveying & Geoinformat, Shanghai 200092, Peoples R China
Keywords
Transformers; Visualization; Encoding; Computer architecture; Task analysis; Simultaneous localization and mapping; Heuristic algorithms; Recognition; localization; SLAM; visual place recognition
DOI
10.1109/LRA.2024.3354627
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
Visual Place Recognition (VPR) aims to retrieve frames from a geotagged database that were captured at the same place as the query frame. To improve the robustness of VPR under perceptual aliasing, sequence-based VPR methods have been proposed. These methods either match frame sequences against each other or extract sequence descriptors for direct retrieval. However, the former usually relies on a constant-velocity assumption that rarely holds in practice, and it is computationally expensive and sensitive to sequence length. Although the latter overcomes these problems, existing sequence descriptors are constructed only by aggregating the features of multiple frames, without any interaction across temporal information, and therefore cannot yield spatio-temporally discriminative descriptors. In this letter, we propose a sequence descriptor that effectively incorporates spatio-temporal information. Specifically, attention within the same frame is used to learn spatial feature patterns, while attention across corresponding local regions of different frames is used to learn the persistence or change of features over time. A sliding window controls the temporal range of attention, and relative positional encoding builds sequential relationships between different features. This allows our descriptors to capture the intrinsic dynamics in a sequence of frames. Comprehensive experiments on challenging benchmark datasets show that the proposed approach outperforms recent state-of-the-art methods.
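The abstract describes three ingredients (spatial attention within a frame, temporally windowed attention across frames, and relative positional encoding over frame offsets), but the record contains no code. Below is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' implementation: the class name `SpatioTemporalAttention`, the parameters `window` and `max_frames`, and the assumed per-frame input of shape (B, T, N, D) (batch, frames, local regions per frame, channels) are all illustrative assumptions.

```python
# Illustrative sketch only; not the released code of the cited letter.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatioTemporalAttention(nn.Module):
    """Joint attention over (frame, region) tokens with a sliding temporal
    window and a learned relative positional bias over frame offsets."""

    def __init__(self, dim=256, heads=4, window=3, max_frames=16):
        super().__init__()
        self.heads, self.window = heads, window
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)
        # One learned bias per head for each frame offset in [-(max_frames-1), max_frames-1].
        self.rel_pos = nn.Parameter(torch.zeros(heads, 2 * max_frames - 1))

    def forward(self, x):                                   # x: (B, T, N, D)
        B, T, N, D = x.shape
        qkv = self.qkv(x).reshape(B, T, N, 3, self.heads, D // self.heads)
        q, k, v = qkv.permute(3, 0, 4, 1, 2, 5)             # each: (B, H, T, N, d)
        q = q.reshape(B, self.heads, T * N, -1)
        k = k.reshape(B, self.heads, T * N, -1)
        v = v.reshape(B, self.heads, T * N, -1)

        # Attention between every (frame, region) token pair: same-frame pairs
        # act as spatial attention, cross-frame pairs as temporal attention.
        attn = (q @ k.transpose(-2, -1)) * self.scale       # (B, H, TN, TN)

        # Relative positional bias indexed by the frame offset of each token pair.
        frame_idx = torch.arange(T, device=x.device).repeat_interleave(N)  # (TN,)
        offset = frame_idx[None, :] - frame_idx[:, None]                   # (TN, TN)
        attn = attn + self.rel_pos[:, offset + self.rel_pos.shape[1] // 2]

        # Sliding temporal window: a token only attends to frames within +/- `window`.
        attn = attn.masked_fill(offset.abs() > self.window, float("-inf"))

        out = attn.softmax(dim=-1) @ v                      # (B, H, TN, d)
        out = self.proj(out.transpose(1, 2).reshape(B, T, N, D))

        # Pool the attended tokens into one L2-normalised sequence descriptor.
        return F.normalize(out.mean(dim=(1, 2)), dim=-1)    # (B, D)
```

For example, `SpatioTemporalAttention()(torch.randn(2, 8, 49, 256))` (two sequences of 8 frames, each frame described by 49 local features of 256 channels) would return two 256-dimensional L2-normalised sequence descriptors, which could then be compared to database descriptors by cosine similarity for retrieval.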
Pages: 2351-2358
Number of pages: 8
Related Papers (50 in total)
  • [1] Li, Jun; Liu, Xianglong; Zhang, Wenxuan; Zhang, Mingyuan; Song, Jingkuan; Sebe, Nicu. Spatio-Temporal Attention Networks for Action Recognition and Detection. IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22(11): 2990-3001.
  • [2] Lu, Feng; Chen, Baifan; Zhou, Xiang-Dong; Song, Dezhen. STA-VPR: Spatio-Temporal Alignment for Visual Place Recognition. IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6(3): 4297-4304.
  • [3] Li, Qiaozhe; Zhao, Xin; He, Ran; Huang, Kaiqi. Recurrent Prediction With Spatio-Temporal Attention for Crowd Attribute Recognition. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30(7): 2167-2177.
  • [4] Peng, Guohao; Yue, Yufeng; Zhang, Jun; Wu, Zhenyu; Tang, Xiaoyu; Wang, Danwei. Semantic Reinforced Attention Learning for Visual Place Recognition. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021: 13415-13422.
  • [5] Liang, Yanjie; Chen, Haosheng; Wu, Qiangqiang; Xia, Changqun; Li, Jia. Joint Spatio-Temporal Similarity and Discrimination Learning for Visual Tracking. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34(8): 7284-7300.
  • [6] Gao, Xuehao; Yang, Yang; Zhang, Yimeng; Li, Maosen; Yu, Jin-Gang; Du, Shaoyi. Efficient Spatio-Temporal Contrastive Learning for Skeleton-Based 3-D Action Recognition. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 405-417.
  • [7] Zhang, Xingxuan; Cheng, Feng; Wang, Shilin. Spatio-Temporal Fusion based Convolutional Sequence Learning for Lip Reading. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019: 713-722.
  • [8] Hong, Younggi; Kim, Min Ju; Lee, Isack; Yoo, Seok Bong. Fluxformer: Flow-Guided Duplex Attention Transformer via Spatio-Temporal Clustering for Action Recognition. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8(10): 6411-6418.
  • [9] Vu Anh Nguyen; Starzyk, Janusz A.; Goh, Wool-Boon. A spatio-temporal Long-term Memory approach for visual place recognition in mobile robotic navigation. ROBOTICS AND AUTONOMOUS SYSTEMS, 2013, 61(12): 1744-1758.
  • [10] Fischer, Tobias; Milford, Michael. Event-Based Visual Place Recognition With Ensembles of Temporal Windows. IEEE ROBOTICS AND AUTOMATION LETTERS, 2020, 5(4): 6924-6931.