A comprehensive overview of dynamic visual SLAM and deep learning: concepts, methods and challenges

Cited by: 15
Authors
Beghdadi, Ayman [1 ]
Mallem, Malik [1 ]
Affiliations
[1] Paris Saclay Univ, Univ Evry, IBISC Lab, F-91025 Evry, France
Keywords
Survey; SLAM (simultaneous localization and mapping); Visual SLAM; Deep learning; Environmental perception; Mobile robot; INERTIAL ODOMETRY; SIMULTANEOUS LOCALIZATION; CAMERA CALIBRATION; MONOCULAR SLAM; VERSATILE; VISION; ROBUST; SCALE;
DOI
10.1007/s00138-022-01306-w
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Visual SLAM (vSLAM) is a research topic that has developed rapidly in recent years, especially with the renewed interest in machine learning and, more particularly, deep-learning-based approaches. Current research mainly aims at improving accuracy and robustness in complex and dynamic environments, and this very active topic has now reached a significant level of maturity. This paper presents a detailed yet accessible survey of vSLAM in the context of deep learning. It attempts to meet this challenge by organizing the literature, explaining the basic concepts and tools, and presenting the current trends. The contributions of this study can be summarized in three essential steps. The first is to present the state of the art incrementally, following the classical processing pipeline of vSLAM-based systems. The second is to give our short- and medium-term view of the development of this very active and evolving field. Finally, we share our opinions on this subject and its interactions with new trends, in particular the deep learning paradigm. We believe that this contribution provides an overview and, more importantly, a critical and detailed vision that can serve as a roadmap in the field of vSLAM, both in terms of models and concepts and in terms of associated technologies.
Pages: 28
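
For readers unfamiliar with the "classical processing pipeline of vSLAM-based systems" that the abstract refers to, the following minimal Python/OpenCV sketch illustrates one representative front-end step: frame-to-frame camera pose estimation from matched ORB features. It is an illustrative sketch only, not code from the surveyed paper; the frame file names and the intrinsic matrix K are placeholder assumptions.

```python
# Minimal sketch (not from the surveyed paper) of a classical vSLAM
# front-end step: estimating relative camera motion between two frames
# from matched local features.
import cv2
import numpy as np

# Hypothetical pinhole intrinsics -- replace with a calibrated camera matrix.
K = np.array([[718.856, 0.0, 607.193],
              [0.0, 718.856, 185.216],
              [0.0, 0.0, 1.0]])

# Placeholder frame paths (assumed, not from the source).
img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# 1) Detect and describe local features (ORB: fast binary descriptors).
orb = cv2.ORB_create(nfeatures=2000)
kp0, des0 = orb.detectAndCompute(img0, None)
kp1, des1 = orb.detectAndCompute(img1, None)

# 2) Match descriptors between the two frames (Hamming distance for ORB).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)
pts0 = np.float32([kp0[m.queryIdx].pt for m in matches])
pts1 = np.float32([kp1[m.trainIdx].pt for m in matches])

# 3) Estimate the essential matrix with RANSAC. Outlier rejection here is
# where moving objects (the dynamic-scene problem the survey focuses on)
# break the static-world assumption of classical vSLAM.
E, inlier_mask = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)

# 4) Recover the relative rotation R and unit-scale translation t.
_, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inlier_mask)
print("R =\n", R, "\nt (up to scale) =\n", t.ravel())
```

In a full vSLAM system this step feeds tracking, local mapping, and loop closing; the deep-learning approaches covered by the survey typically replace the hand-crafted features or the geometric outlier handling, for example by segmenting and discarding dynamic objects before pose estimation.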