Application of fusion 2D lidar and binocular vision in robot locating obstacles

Cited by: 6
Authors
Shao, Weiwei [1 ]
Zhang, Handong [1 ]
Wu, Yuxiu [1 ]
Sheng, Na [2 ]
Affiliations
[1] Anhui Univ Technol, Maanshan, Peoples R China
[2] Maanshan Univ, Maanshan, Peoples R China
Keywords
2D lidar; binocular camera; robot; fusion algorithm; LASER SCANNER; LOCALIZATION; RADAR; CALIBRATION; SYSTEM;
DOI
10.3233/JIFS-189698
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
When a robot relies on 2D lidar alone or a binocular camera alone to locate obstacles, problems such as missing obstacle information or inaccurate obstacle localization arise, which hinder the robot's normal operation. To obtain accurate 3D obstacle information, this paper proposes an algorithm that fuses 2D lidar and binocular vision for obstacle localization. The depth values of the 2D lidar point cloud are used as a benchmark: by fitting an error equation for the depth values of the binocular camera point cloud, the camera point-cloud depths are corrected to yield an accurate 3D camera point cloud, and thus accurate 3D obstacle information. Extensive experiments show that the fusion algorithm of 2D lidar and binocular vision obtains accurate 3D obstacle information. The fusion of 2D lidar and binocular vision approximately achieves the measurement performance of 3D lidar, and the obstacle point cloud it produces is relatively dense, so accurate 3D obstacle information can be obtained. This method reduces the influence of any single sensor on the robot's obstacle localization, thereby enabling accurate obstacle localization, which is of practical significance for robot navigation.
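The core idea in the abstract, using lidar depths as a benchmark to fit an error equation that corrects the stereo (binocular) depths, can be sketched as follows. This is a minimal illustration, not the authors' actual method: the quadratic error model, the synthetic data, and the `correct_depth` helper are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" depths along the lidar scan plane (metres); the 2D
# lidar is treated as the benchmark sensor, as in the abstract.
lidar_depth = np.linspace(0.5, 5.0, 50)

# Simulated stereo depths with a depth-dependent bias plus noise
# (stereo depth error typically grows with distance).
stereo_depth = lidar_depth + 0.02 * lidar_depth**2 + rng.normal(0, 0.01, 50)

# Fit the depth error e(z) = stereo - lidar as a polynomial in the raw
# stereo depth; a quadratic model is an assumption for this sketch.
error = stereo_depth - lidar_depth
coeffs = np.polyfit(stereo_depth, error, deg=2)

def correct_depth(z):
    """Subtract the fitted error from raw stereo depth values."""
    return z - np.polyval(coeffs, z)

# Apply the correction to the (here 1-D) stereo point cloud; the
# residual against the lidar benchmark shrinks after correction.
corrected = correct_depth(stereo_depth)
print(np.abs(corrected - lidar_depth).mean())
```

In the paper's setting the fitted correction would then be applied to the full dense stereo point cloud, including points outside the lidar's scan plane, which is what lets the fused system approximate a 3D lidar.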
Pages: 4387-4394
Page count: 8