RGB-D SLAM in Dynamic Environments with Multilevel Semantic Mapping

Cited: 0
Authors
Yusheng Qin
Tiancan Mei
Zhi Gao
Zhipeng Lin
Weiwei Song
Xuhui Zhao
Affiliations
[1] Wuhan University,School of Electronic Information
[2] Wuhan University,School of Remote Sensing and Information Engineering
[3] The Chinese University of Hong Kong,Department of Mechanical and Automation Engineering
[4] Peng Cheng Laboratory,Department of Mathematics and Theories
Source
Journal of Intelligent & Robotic Systems | 2022, Vol. 105
Keywords
RGB-D SLAM; Semantic mapping; Dynamic environment; Object detection;
DOI: not available
Abstract
Dynamic environments pose a severe challenge to visual SLAM, as moving objects invalidate the assumption of a static background. While recent works employ deep learning to address this challenge, they still fail to determine whether an object is actually moving, which misleads both object tracking and background reconstruction. We therefore design a SLAM system that simultaneously estimates the camera trajectory and constructs object-level dense 3D semantic maps in dynamic environments. Building on deep learning-based object detection, we apply geometric constraints derived from optical flow and inter-object relationships to identify objects that are predefined as static but are actually moving. To construct more precise 3D semantic maps, our method employs an unsupervised algorithm to segment the 3D point cloud generated from depth data into meaningful clusters. These point clusters are then fused with semantic cues produced by deep learning to yield a more accurate 3D semantic map. We evaluate the proposed system on the TUM RGB-D and ICL-NUIM datasets as well as in real-world indoor environments. Qualitative and quantitative experiments show that our method outperforms state-of-the-art approaches in various dynamic scenes in terms of both accuracy and robustness.
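The abstract's core idea of flagging "predefined static" objects that are actually moving can be illustrated with a per-object optical-flow residual test: compare the flow observed inside a detected object against the flow predicted from camera ego-motion alone. The sketch below is a minimal numpy illustration of that principle; the function name `is_dynamic`, the pixel threshold, and the synthetic flow fields are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def is_dynamic(observed_flow, ego_flow, thresh=1.0):
    """Flag a detected object as moving.

    observed_flow, ego_flow: (N, 2) arrays of per-pixel optical flow
    sampled inside the object's mask -- the measured flow versus the
    flow predicted from camera ego-motion alone. If the median
    residual magnitude exceeds `thresh` (pixels), the object is
    considered dynamic even if its semantic class is "static".
    """
    residual = np.linalg.norm(observed_flow - ego_flow, axis=1)
    return bool(np.median(residual) > thresh)

# A parked chair: observed flow matches the ego-motion prediction.
ego = np.tile([2.0, 0.0], (100, 1))
static_obs = ego + np.random.default_rng(0).normal(0.0, 0.1, ego.shape)
# A walking person: extra motion on top of ego-motion.
moving_obs = ego + np.array([5.0, 3.0])

print(is_dynamic(static_obs, ego))  # False
print(is_dynamic(moving_obs, ego))  # True
```

Using a median rather than a mean keeps the test robust to a few outlier flow vectors near object boundaries.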
Related Papers (50 total)
  • [1] Qin, Yusheng; Mei, Tiancan; Gao, Zhi; Lin, Zhipeng; Song, Weiwei; Zhao, Xuhui. RGB-D SLAM in Dynamic Environments with Multilevel Semantic Mapping. Journal of Intelligent & Robotic Systems, 2022, 105(4).
  • [2] Zhang, Jianbo; Liu, Yanjie; Chen, Junguo; Ma, Liulong; Jin, Dong; Chen, Jiao. Semantic Segmentation based Dense RGB-D SLAM in Dynamic Environments. 2019 3rd International Conference on Artificial Intelligence, Automation and Control Technologies (AIACT 2019), 2019, 1267.
  • [3] Ran, Teng; Yuan, Liang; Zhang, Jianbo; Tang, Dingxin; He, Li. RS-SLAM: A Robust Semantic SLAM in Dynamic Environments Based on RGB-D Sensor. IEEE Sensors Journal, 2021, 21(18): 20657-20664.
  • [4] Ji, Tete; Wang, Chen; Xie, Lihua. Towards Real-time Semantic RGB-D SLAM in Dynamic Environments. 2021 IEEE International Conference on Robotics and Automation (ICRA 2021), 2021: 11175-11181.
  • [5] Kenye, Lhilo; Kala, Rahul. Improving RGB-D SLAM in dynamic environments using semantic aided segmentation. Robotica, 2022, 40(6): 2065-2090.
  • [6] Xiao, Yao; Zou, Junjie; Jin, Ronghe; Mei, Tiancan. Robust RGB-D SLAM in Dynamic Environments using Geometry and Semantic Information. 2024 IEEE 18th International Conference on Control & Automation (ICCA 2024), 2024: 731-736.
  • [7] Guo, Ruibin; Liu, Xinghua. Ground Enhanced RGB-D SLAM for Dynamic Environments. 2021 IEEE International Conference on Robotics and Biomimetics (IEEE-ROBIO 2021), 2021: 1171-1177.
  • [8] Yang, Xin; Yuan, Zikang; Zhu, Dongfu; Chi, Cheng; Li, Kun; Liao, Chunyuan. Robust and Efficient RGB-D SLAM in Dynamic Environments. IEEE Transactions on Multimedia, 2021, 23: 4208-4219.
  • [9] Wang, Xiqi; Zheng, Shunyi; Lin, Xiaohu; Zhu, Fengbo. Improving RGB-D SLAM accuracy in dynamic environments based on semantic and geometric constraints. Measurement, 2023, 217.
  • [10] Wang, Yanan; Tian, Yaobin; Chen, Jiawei; Chen, Cheng; Xu, Kun; Ding, Xilun. MSSD-SLAM: Multifeature Semantic RGB-D Inertial SLAM With Structural Regularity for Dynamic Environments. IEEE Transactions on Instrumentation and Measurement, 2025, 74.