Advancing Simultaneous Localization and Mapping with Multi-Sensor Fusion and Point Cloud De-Distortion

Cited: 1
Authors
Shao, Haiyan [1 ]
Zhao, Qingshuai [1 ]
Chen, Hongtang [1 ]
Yang, Weixin [2 ]
Chen, Bin [3 ]
Feng, Zhiquan [4 ]
Zhang, Jinkai [1 ]
Teng, Hao [1 ]
Affiliations
[1] Univ Jinan, Sch Mech Engn, Jinan 250022, Peoples R China
[2] Univ Nevada, Sch Elect & Biomed Engn, Reno, NV 89557 USA
[3] Shandong Youbaote Intelligent Robot Co Ltd, Jinan 250098, Peoples R China
[4] Univ Jinan, Sch Informat Sci & Engn, Jinan 250022, Peoples R China
Keywords
obstacle detection; motion distortion; error compensation; simultaneous localization and mapping (SLAM); LiDAR; depth camera; quadruped robot dog; ROBOTS;
DOI
10.3390/machines11060588
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Subject Classification Code
0808; 0809
Abstract
This study addresses the incomplete or missing information that arises when obstacle detection relies on a single sensor, as well as the motion distortion of LiDAR point cloud data during simultaneous localization and mapping in complex environments. The research makes two contributions. First, a novel obstacle detection method, the point-map fusion (PMF) algorithm, is proposed. It integrates point cloud data from the LiDAR, camera, and odometer with local grid maps, and consists of two components: the point-fusion (PF) algorithm, which converts the LiDAR point cloud and the camera's laser-like point cloud into a common point cloud library (PCL) format, concatenates them, and retains the points closest to the quadruped robot dog as the valid data; and the map-fusion (MF) algorithm, which fuses the local grid maps produced by the Gmapping and OctoMap algorithms (denoted map A and map B, respectively) using Bayesian estimation theory. This map fusion improves the precision and reliability of the resulting map. Second, a motion distortion removal (MDR) method for LiDAR point cloud data based on odometer readings is proposed. The MDR method linearly interpolates legged-odometer data over the original distorted LiDAR scan to determine the corresponding pose of the quadruped robot dog, and then transforms the LiDAR points into the quadruped robot dog coordinate system, mitigating motion distortion. Experimental results showed that the proposed PMF algorithm achieved a 50% higher success rate than using only the LiDAR or the PF algorithm in isolation, and that the MDR algorithm improved mapping accuracy by 45.9% when motion distortion was taken into account. The effectiveness of the proposed methods was confirmed through rigorous experimentation.
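The map-fusion (MF) step described in the abstract can be read as an independent-evidence Bayesian update over two aligned occupancy grids. The following is a minimal sketch of that idea in Python/NumPy, assuming map A (Gmapping) and map B (OctoMap, projected to 2D) are already co-registered probability grids of the same shape; the function and variable names are illustrative and not taken from the paper.

import numpy as np

def fuse_grids(map_a, map_b, eps=1e-6):
    """Fuse two aligned occupancy-probability grids via log-odds addition."""
    # Clamp probabilities away from 0/1 so the log-odds stay finite.
    pa = np.clip(map_a, eps, 1.0 - eps)
    pb = np.clip(map_b, eps, 1.0 - eps)
    # Adding log-odds corresponds to a Bayesian update with a uniform 0.5 prior.
    log_odds = np.log(pa / (1.0 - pa)) + np.log(pb / (1.0 - pb))
    return 1.0 / (1.0 + np.exp(-log_odds))  # back to occupancy probabilities

The motion distortion removal (MDR) step is, in essence, scan de-skewing: each LiDAR return is re-expressed in a single reference pose of the robot, using odometer poses linearly interpolated at that return's timestamp. The sketch below illustrates this for a 2D scan with a simplified [x, y, yaw] pose representation; it is an assumption-laden illustration, not the paper's implementation.

import numpy as np

def deskew_scan(points, point_times, odom_times, odom_poses):
    """Remove motion distortion from one 2D LiDAR scan.

    points      : (N, 2) beam endpoints in the sensor/robot frame
    point_times : (N,)   timestamp of each beam
    odom_times  : (M,)   timestamps of legged-odometer readings
    odom_poses  : (M, 3) odometer poses [x, y, yaw] in the world frame
    Returns the points expressed in the robot pose at the end of the scan.
    """
    # Linearly interpolate the robot pose at each beam's timestamp.
    x = np.interp(point_times, odom_times, odom_poses[:, 0])
    y = np.interp(point_times, odom_times, odom_poses[:, 1])
    yaw = np.interp(point_times, odom_times, np.unwrap(odom_poses[:, 2]))

    # Reference pose: where the robot was when the last beam was measured.
    xr, yr, yawr = x[-1], y[-1], yaw[-1]
    cr, sr = np.cos(yawr), np.sin(yawr)

    deskewed = np.empty((len(points), 2))
    for i, (px, py) in enumerate(points):
        # 1) Sensor frame -> world frame, using the pose at this beam's time.
        c, s = np.cos(yaw[i]), np.sin(yaw[i])
        wx = c * px - s * py + x[i]
        wy = s * px + c * py + y[i]
        # 2) World frame -> reference robot frame at the end of the scan.
        dx, dy = wx - xr, wy - yr
        deskewed[i] = (cr * dx + sr * dy, -sr * dx + cr * dy)
    return deskewed

Both sketches use only closed-form operations (log-odds addition and linear pose interpolation), matching the Bayesian-estimation and linear-interpolation steps named in the abstract, but the actual data structures and interfaces of the PMF and MDR implementations are not specified here.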
Pages: 19
Related Papers
50 records in total
  • [41] Multi-sensor data fusion method to discern point targets
    Li, H
    An, W
    Xu, H
    Sun, ZK
    SIGNAL AND DATA PROCESSING OF SMALL TARGETS 1997, 1997, 3163 : 575 - 582
  • [42] Fault Tolerant Multi-Sensor Fusion for Multi-Robot Collaborative Localization
    Al Hage, Joelle
    El Najjar, Maan E.
    Pomorski, Denis
    2016 IEEE INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS (MFI), 2016, : 272 - 278
  • [43] Multi-sensor missile-borne LiDAR point cloud data augmentation based on Monte Carlo distortion simulation
    Zhao, Luda
    Hu, Yihua
    Han, Fei
    Dou, Zhenglei
    Li, Shanshan
    Zhang, Yan
    Wu, Qilong
    CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2025, 10 (01) : 300 - 316
  • [44] MULTI-SENSOR FUSION FOR SIMULTANEOUS GEOMETRIC-PHYSICS MODELING OF ENVIRONMENT
    Zhang, Su
    Chen, Haoyu
    Zhang, Minchao
    Tan, Kai
    Wang, Haipeng
    Xu, Feng
    IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2023, : 5627 - 5630
  • [45] Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach
    Liu, Mengyun
    Chen, Ruizhi
    Li, Deren
    Chen, Yujin
    Guo, Guangyi
    Cao, Zhipeng
    Pan, Yuanjin
    SENSORS, 2017, 17 (12)
  • [46] Modular Multi-Sensor Fusion for Underwater Localization for Autonomous ROV Operations
    Scheiber, Martin
    Cardaillac, Alexandre
    Brommer, Christian
    Weiss, Stephan
    Ludvigsen, Martin
    2022 OCEANS HAMPTON ROADS, 2022,
  • [47] Multi-Sensor Fusion Based Localization System for an Amphibious Spherical Robot
    Liu, Yu
    Guo, Shuxiang
    Shi, Liwei
    Xing, Huiming
    Hou, Xihuan
    Liu, Huikang
    Hu, Yao
    Xia, Debin
    Li, Zan
    2019 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION (ICMA), 2019, : 2529 - 2534
  • [48] Multi-sensor data fusion for land vehicle localization using RTMAPS
    Abuhadrous, I
    Nashashibi, F
    Laurgeau, C
    Chinchole, M
    IEEE IV2003: INTELLIGENT VEHICLES SYMPOSIUM, PROCEEDINGS, 2003, : 339 - 344
  • [49] A novel independent train localization method based on multi-sensor fusion
    Zhou, Datian
    Tang, Tao
RAIL TRANSPORTATION 2005, 2005: 19 - 23
  • [50] Localization technique of pipeline robot based on multi-sensor data fusion
Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001, China
    KONGZHI YU JUECE/CONTROL AND DECISION, 2006, (6): 661 - 665