DVDS: A deep visual dynamic SLAM system

Cited by: 4
Authors
Xie, Tao [1 ]
Sun, Qihao [1 ]
Sun, Tao [1 ]
Zhang, Jinhang [1 ]
Dai, Kun [1 ]
Zhao, Lijun [1 ]
Wang, Ke [1 ]
Li, Ruifeng [1 ]
Affiliations
[1] Harbin Inst Technol, State Key Lab Robot & Syst, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Simultaneous localization and mapping; Transformer; Deep learning; Versatile
DOI
10.1016/j.eswa.2024.125438
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Simultaneous localization and mapping (SLAM) using visual sensors is an extensively investigated research area with significant potential for robotics and autonomous vehicles. Recently, dense SLAM systems built on learning-based methods have shown superior accuracy and robustness compared with conventional techniques. Nevertheless, contemporary learning-based SLAM systems exhibit notable pose-estimation errors, particularly in dynamic environments. In addition, the constrained receptive field of convolutional features impedes these methods on homogeneous, texture-less images, leaving them vulnerable to noise perturbations. We develop a novel deep visual dynamic SLAM (DVDS) system that exploits only the static pixels within images to recover camera poses. Specifically, we formulate a dynamic object exclusion mechanism that removes dynamic constituents of the scene before optical flow computation, thereby improving estimation precision. In addition, we introduce an efficient dispersive transformer (DisFormer) that enables per-pixel features to assimilate long-range information from surrounding features, yielding more precise 4D correlation volumes. Building on the DisFormer, we propose a DisFormer-based gated recurrent unit (GRU) that generates a refined flow field together with a confidence map, which the dense bundle adjustment layer then uses to iteratively correct the residuals of the inverse depths and the associated camera poses. The global receptive field provided by the DisFormer promotes information integration over a wider contextual window, improving the robustness of our SLAM system. Comprehensive experiments show that the proposed DVDS system outperforms state-of-the-art methods in both static and dynamic scenes.
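The core idea of the dynamic object exclusion mechanism described above can be sketched in a few lines: pixels flagged as dynamic are suppressed (their confidence is zeroed) so that only static pixels constrain the downstream bundle adjustment. This is an illustrative toy sketch only; the function name `mask_dynamic_pixels` and the array shapes are assumptions, not the authors' actual implementation.

```python
import numpy as np

def mask_dynamic_pixels(confidence: np.ndarray, dynamic_mask: np.ndarray) -> np.ndarray:
    """Zero the confidence at pixels flagged as dynamic, so the dense
    bundle adjustment layer effectively ignores them when correcting
    inverse depths and camera poses."""
    assert confidence.shape == dynamic_mask.shape
    return np.where(dynamic_mask, 0.0, confidence)

# Toy example: a 2x3 per-pixel confidence map; the middle column is
# marked dynamic (e.g. it falls on a moving object).
conf = np.array([[0.9, 0.8, 0.7],
                 [0.6, 0.5, 0.4]])
dyn = np.array([[False, True, False],
                [False, True, False]])
masked = mask_dynamic_pixels(conf, dyn)
print(masked)  # dynamic pixels now carry zero weight
```

In a real pipeline the mask would come from a segmentation or motion-consistency module applied before the optical flow computation, as the abstract describes; here it is just a hand-written boolean array.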
Pages: 15