DKP-SLAM: A Visual SLAM for Dynamic Indoor Scenes Based on Object Detection and Region Probability

Cited by: 0
Authors
Yin, Menglin [1 ]
Qin, Yong [1 ,2 ,3 ,4 ]
Peng, Jiansheng [1 ,2 ,3 ,4 ]
Affiliations
[1] Guangxi Univ Sci & Technol, Coll Automat, Liuzhou 545000, Peoples R China
[2] Hechi Univ, Dept Artificial Intelligence & Mfg, Hechi 546300, Peoples R China
[3] Educ Dept Guangxi Zhuang Autonomous Reg, Key Lab AI & Informat Proc, Hechi 546300, Peoples R China
[4] Hechi Univ, Sch Chem & Bioengn, Guangxi Key Lab Sericulture Ecol & Appl Intelligen, Hechi 546300, Peoples R China
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2025, Vol. 82, No. 01
Funding
National Natural Science Foundation of China;
Keywords
Visual SLAM; dynamic scene; YOLOX; K-means++ clustering; dynamic probability;
DOI
10.32604/cmc.2024.057460
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In dynamic scenarios, visual simultaneous localization and mapping (SLAM) algorithms often incorrectly incorporate dynamic points during camera pose computation, reducing accuracy and robustness. This paper presents a dynamic SLAM algorithm that leverages object detection and regional dynamic probability. First, a parallel thread runs the YOLOX object detection model to gather 2D semantic information and to compensate for missed detections. Next, an improved K-means++ clustering algorithm clusters the bounding-box regions, adaptively determining the threshold for extracting dynamic object contours as the dynamic points change; this divides the image into low-dynamic, suspicious-dynamic, and high-dynamic regions. In the tracking thread, the dynamic point removal module assigns dynamic probability weights to the feature points in these regions and, combined with geometric methods, detects and removes the dynamic points. A final evaluation on the public TUM RGB-D dataset shows that the proposed dynamic SLAM algorithm surpasses most existing SLAM algorithms, providing better pose estimation accuracy and robustness in dynamic environments.
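The region-probability idea sketched in the abstract can be illustrated with a minimal example: K-means++ clustering (here via scikit-learn) groups the depths of feature points inside a YOLOX bounding box into three regions, and each region receives a dynamic-probability weight. This is a sketch under stated assumptions, not the authors' implementation; the function names, the nearest-cluster-is-dynamic mapping, and the weight values are illustrative.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-region dynamic-probability weights (assumed values,
# not taken from the paper).
REGION_WEIGHTS = {"high": 0.9, "suspicious": 0.5, "low": 0.1}

def classify_regions(depths: np.ndarray) -> np.ndarray:
    """Cluster the depths of feature points inside a detection box into
    three groups labeled high / suspicious / low dynamic.

    Assumption: the nearest cluster corresponds to the detected object's
    contour (most likely dynamic); the farthest is static background.
    """
    km = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0)
    labels = km.fit_predict(depths.reshape(-1, 1))
    order = np.argsort(km.cluster_centers_.ravel())  # near -> far
    name = {int(order[0]): "high",
            int(order[1]): "suspicious",
            int(order[2]): "low"}
    return np.array([name[int(l)] for l in labels])

def dynamic_probabilities(region_labels: np.ndarray) -> np.ndarray:
    """Map region labels to weights used when deciding whether a feature
    point should be excluded from camera pose estimation."""
    return np.array([REGION_WEIGHTS[r] for r in region_labels])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic depths: a person at ~1.5 m, a chair at ~2.5 m, a wall at ~4 m.
    depths = np.concatenate([rng.normal(1.5, 0.1, 40),
                             rng.normal(2.5, 0.2, 20),
                             rng.normal(4.0, 0.1, 60)])
    probs = dynamic_probabilities(classify_regions(depths))
    print(f"points flagged high-dynamic: {(probs > 0.8).sum()} / {len(depths)}")

In the paper's pipeline these weights would then be combined with geometric checks (e.g., epipolar-constraint residuals) before a feature point is discarded; the sketch covers only the clustering and weighting step.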
Pages: 1329-1347
Number of pages: 19