Advancing Simultaneous Localization and Mapping with Multi-Sensor Fusion and Point Cloud De-Distortion

Times Cited: 1
Authors
Shao, Haiyan [1 ]
Zhao, Qingshuai [1 ]
Chen, Hongtang [1 ]
Yang, Weixin [2 ]
Chen, Bin [3 ]
Feng, Zhiquan [4 ]
Zhang, Jinkai [1 ]
Teng, Hao [1 ]
Affiliations
[1] Univ Jinan, Sch Mech Engn, Jinan 250022, Peoples R China
[2] Univ Nevada, Sch Elect & Biomed Engn, Reno, NV 89557 USA
[3] Shandong Youbaote Intelligent Robot Co Ltd, Jinan 250098, Peoples R China
[4] Univ Jinan, Sch Informat Sci & Engn, Jinan 250022, Peoples R China
Keywords
obstacle detection; motion distortion; error compensation; simultaneous localization and mapping (SLAM); LiDAR; depth camera; quadruped robot dog; ROBOTS;
DOI
10.3390/machines11060588
CLC Number (Chinese Library Classification)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Classification Codes
0808; 0809
Abstract
This study addresses the challenges associated with incomplete or missing information in obstacle detection methods that rely on a single sensor. It also tackles the issue of motion distortion in LiDAR point cloud data during simultaneous localization and mapping in complex environments. The research introduces two significant contributions. First, a novel obstacle detection method, named the point-map fusion (PMF) algorithm, was proposed. This method integrates point cloud data from the LiDAR, camera, and odometer, along with local grid maps. The PMF algorithm consists of two components: the point-fusion (PF) algorithm, which combines the LiDAR point cloud data and the camera's laser-like point cloud data through Point Cloud Library (PCL) format conversion and concatenation, and selects the point cloud closest to the quadruped robot dog as the valid data; and the map-fusion (MF) algorithm, which fuses local grid maps acquired using the Gmapping and OctoMap algorithms, leveraging Bayesian estimation theory. The local grid maps obtained by the Gmapping and OctoMap algorithms are denoted as map A and map B, respectively. This map fusion significantly enhances the precision and reliability of the approach. Second, a motion distortion removal (MDR) method for LiDAR point cloud data based on odometer readings was proposed. The MDR method uses legged-odometer data to linearly interpolate over the original distorted LiDAR point cloud data, determining the corresponding pose of the quadruped robot dog. The LiDAR point cloud data are then transformed into the quadruped robot dog's coordinate system, effectively mitigating motion distortion. Experimental results demonstrated that the proposed PMF algorithm achieved a 50% improvement in success rate compared to using only LiDAR or the PF algorithm in isolation, while the MDR algorithm enhanced mapping accuracy by 45.9% when motion distortion was taken into account. The effectiveness of the proposed methods was confirmed through rigorous experimentation.
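The abstract describes the PF step as PCL format conversion, concatenation, and selection of the point cloud closest to the robot. The sketch below illustrates one plausible reading of that step, assuming both clouds are 2-D points already expressed in the robot base frame and that "closest" is applied per bearing bin; the function name, bin width, and these interpretations are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the point-fusion (PF) idea: merge a LiDAR cloud and a
# depth-camera pseudo-laser cloud (both assumed to be 2-D points in the
# robot base frame) and, per bearing bin, keep the return closest to the
# robot. Bin width and frame conventions are assumptions.
import numpy as np

def fuse_point_clouds(lidar_xy, camera_xy, angular_res_deg=1.0):
    """Concatenate two (N, 2) clouds and keep the nearest point per bearing bin."""
    merged = np.vstack([lidar_xy, camera_xy])              # concatenation step
    ranges = np.hypot(merged[:, 0], merged[:, 1])           # distance to robot
    bearings = np.degrees(np.arctan2(merged[:, 1], merged[:, 0]))
    bins = np.floor((bearings + 180.0) / angular_res_deg).astype(int)

    nearest = {}
    for idx, (b, r) in enumerate(zip(bins, ranges)):
        if b not in nearest or r < ranges[nearest[b]]:
            nearest[b] = idx                                 # keep closest return
    return merged[sorted(nearest.values())]

# Example: the camera sees a closer obstacle straight ahead than the LiDAR does
lidar = np.array([[2.0, 0.0], [0.0, 3.0]])
camera = np.array([[1.5, 0.0], [0.0, 3.5]])
print(fuse_point_clouds(lidar, camera))   # keeps [1.5, 0] ahead and [0, 3] at 90 deg
```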
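For the MF step, the abstract states only that map A (Gmapping) and map B (OctoMap) are fused using Bayesian estimation theory. A common realization of such a per-cell Bayesian update is log-odds addition over aligned occupancy grids; the sketch below uses that rule as an assumption, since the paper's exact update is not given in the abstract.

```python
# Minimal sketch of Bayesian fusion of two aligned occupancy grids
# (map A from Gmapping, map B from a projected OctoMap). Treating the two
# maps as independent observations and adding per-cell log-odds is one
# standard realization of "fusion by Bayesian estimation"; the paper's
# exact update rule may differ.
import numpy as np

def to_log_odds(p):
    p = np.clip(p, 1e-6, 1.0 - 1e-6)        # avoid infinities at 0 and 1
    return np.log(p / (1.0 - p))

def fuse_occupancy_grids(map_a, map_b, prior=0.5):
    """map_a, map_b: arrays of occupancy probabilities in [0, 1], same shape."""
    l = to_log_odds(map_a) + to_log_odds(map_b) - to_log_odds(prior)
    return 1.0 / (1.0 + np.exp(-l))          # back to probability

# Example: a cell one map finds likely occupied, the other finds uninformative
print(fuse_occupancy_grids(np.array([[0.8]]), np.array([[0.5]])))  # stays ~0.8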
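The MDR step is described as linear interpolation of legged-odometer data to recover the robot pose for each distorted LiDAR point, followed by a transformation into the robot coordinate system. The sketch below follows that description under illustrative assumptions: a planar SE(2) pose model, per-point timestamps, and the scan-end pose as the common reference frame.

```python
# Minimal sketch of the motion distortion removal (MDR) idea: linearly
# interpolate the legged-odometry pose at each LiDAR point's timestamp and
# re-express the point in a single reference frame (here the scan-end pose).
# The 2-D SE(2) model, per-point timestamps, and scan-end reference choice
# are illustrative assumptions.
import numpy as np

def interp_pose(t, odom_t, odom_xytheta):
    """Linearly interpolate (x, y, yaw) at time t from sorted odometry samples."""
    x = np.interp(t, odom_t, odom_xytheta[:, 0])
    y = np.interp(t, odom_t, odom_xytheta[:, 1])
    yaw = np.interp(t, odom_t, np.unwrap(odom_xytheta[:, 2]))
    return x, y, yaw

def undistort_scan(points_xy, point_times, odom_t, odom_xytheta):
    """Map each point, sensed in the robot frame at its own timestamp,
    into the robot frame at the scan-end time."""
    x_e, y_e, yaw_e = interp_pose(point_times[-1], odom_t, odom_xytheta)
    corrected = []
    for (px, py), t in zip(points_xy, point_times):
        x_i, y_i, yaw_i = interp_pose(t, odom_t, odom_xytheta)
        # point -> world frame, using the interpolated pose at its own timestamp
        wx = x_i + np.cos(yaw_i) * px - np.sin(yaw_i) * py
        wy = y_i + np.sin(yaw_i) * px + np.cos(yaw_i) * py
        # world -> robot frame at scan end
        dx, dy = wx - x_e, wy - y_e
        corrected.append((np.cos(yaw_e) * dx + np.sin(yaw_e) * dy,
                          -np.sin(yaw_e) * dx + np.cos(yaw_e) * dy))
    return np.asarray(corrected)
```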
Pages: 19
Related Papers
50 items in total
  • [31] Multi-Sensor Data Fusion Technologies for Blanket Jamming Localization
    王菊
    吴嗣亮
    曾涛
    Journal of Beijing Institute of Technology (English Edition), 2005, (01) : 22 - 26
  • [32] Mobile robot localization by multi-sensor fusion and scene matching
    Yang, YB
    Tsui, HT
    INTELLIGENT ROBOTS AND COMPUTER VISION XV: ALGORITHMS, TECHNIQUES, ACTIVE VISION, AND MATERIALS HANDLING, 1996, 2904 : 298 - 309
  • [33] Localization of Autonomous Cars using Multi-sensor Data Fusion
    Wang, Xiaohua
    Lian, Yanru
    Li, Li
    2018 CHINESE AUTOMATION CONGRESS (CAC), 2018, : 4152 - 4155
  • [34] Robot Localization in Indoor and Outdoor Environments by Multi-sensor Fusion
    Yousuf, Sofia
    Kadri, Muhammad Bilal
    2018 14TH INTERNATIONAL CONFERENCE ON EMERGING TECHNOLOGIES (ICET), 2018,
  • [35] A Flying Robot Localization Method Based on Multi-sensor Fusion
    Liu, Changan
    Zhang, Sheng
    Wu, Hua
    Dong, Ruifang
    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2014, 11
  • [36] Adaptive Multi-Sensor Fusion Localization Method Based on Filtering
    Wang, Zhihong
    Bai, Yuntian
    Hu, Jie
    Tang, Yuxuan
    Cheng, Fei
    MATHEMATICS, 2024, 12 (14)
  • [37] LatentSLAM: unsupervised multi-sensor representation learning for localization and mapping
    Catal, Ozan
    Jansen, Wouter
    Verbelen, Tim
    Dhoedt, Bart
    Steckel, Jan
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 6739 - 6745
  • [38] Multi-sensor Fusion Glass Detection for Robot Navigation and Mapping
    Wei, Hao
    Li, Xue-en
    Shi, Ying
    You, Bo
    Xu, Yi
    2018 WRC SYMPOSIUM ON ADVANCED ROBOTICS AND AUTOMATION (WRC SARA), 2018, : 184 - 188
  • [39] Multi-sensor data fusion for seafloor mapping and ordnance location
    Wright, J
    Scott, K
    Chao, TH
    Lau, B
    PROCEEDINGS OF THE 1996 SYMPOSIUM ON AUTONOMOUS UNDERWATER VEHICLE TECHNOLOGY, 1996, : 167 - 175
  • [40] WiFi access point localization in urban multi-sensor environment
    Tomas, Boris
    Posaric, Lovro
    CENTRAL EUROPEAN CONFERENCE ON INFORMATION AND INTELLIGENT SYSTEMS, CECIIS 2022, 2022, : 443 - 448