Hard-to-Detect Obstacle Mapping by Fusing LIDAR and Depth Camera

Times Cited: 2
Authors
Jeyabal, Sidharth [1 ,2 ]
Sachinthana, W. K. R. [1 ]
Samarakoon, S. M. P. Bhagya [1 ]
Elara, Mohan Rajesh [1 ]
Sheu, Bing J. [2 ]
Affiliations
[1] Singapore Univ Technol & Design, Engn Prod Dev Pillar, Singapore 487372, Singapore
[2] Chang Gung Univ, Coll Engn, Dept Elect Engn, Taoyuan 330, Taiwan
Keywords
Robots; Laser radar; Sensors; Navigation; Cameras; Glass; Sensor fusion; Coverage path planning (CPP); mapping; obstacle detection; robot safety; sensor fusion; ROBOTS; VISION;
DOI
10.1109/JSEN.2024.3409623
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
In the era of autonomy, intelligent systems capable of navigating and perceiving their surroundings have become ubiquitous. Many sensors have been developed for environmental perception, with LIDAR emerging as a preeminent technology for precise obstacle detection. However, LIDAR has inherent limitations: it cannot detect obstacles that lie below the sensor's mounting height or that its rays pass through. Environments where robots are typically deployed often contain such obstacles, which can cause collisions and entanglements and degrade performance. This research addresses these limitations by recognizing obstacles that traditionally challenge LIDAR's detection capabilities. Objects such as glass, carpets, wires, and ramps are identified as hard-to-detect objects by LIDAR (HDOL). YOLOv8 is used to detect HDOL with a depth camera, and the detected objects are incorporated into the environmental map, circumventing the constraints posed by LIDAR. Furthermore, HDOL-aware coverage path planning (CPP) is proposed, combining boustrophedon motion with the A* algorithm to navigate the robot safely through the environment. Real-world experiments validate the applicability of the proposed method for ensuring robot safety.
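The abstract outlines a three-step pipeline: detect HDOL with YOLOv8 on depth-camera frames, write the detections into the environmental map, and run HDOL-aware CPP (boustrophedon motion with A*) on that map. Below is a minimal Python sketch of the first two steps under stated assumptions; the weights file hdol.pt, the class names, the camera intrinsics, and the grid parameters are illustrative placeholders, not the authors' implementation.

    # Minimal sketch (assumptions: hypothetical HDOL-trained YOLOv8 weights
    # "hdol.pt", a registered RGB-D frame with depth in metres, assumed pinhole
    # intrinsics, and a 10 m x 10 m grid with 5 cm cells).
    import numpy as np
    from ultralytics import YOLO

    model = YOLO("hdol.pt")                          # hypothetical HDOL model
    fx, cx = 600.0, 320.0                            # assumed horizontal intrinsics (pixels)
    RES, ORIGIN = 0.05, (-5.0, -5.0)                 # grid resolution (m/cell) and origin (m)
    grid = np.zeros((200, 200), dtype=np.uint8)      # 2-D occupancy grid

    def mark_hdol(rgb, depth):
        """Detect HDOL in the RGB image and mark their footprints in the grid."""
        result = model(rgb, verbose=False)[0]
        for box, cls in zip(result.boxes.xyxy.cpu().numpy(),
                            result.boxes.cls.cpu().numpy()):
            u = int((box[0] + box[2]) / 2)           # bounding-box centre, pixel column
            v = int((box[1] + box[3]) / 2)           # bounding-box centre, pixel row
            z = float(depth[v, u])                   # range along the optical axis (m)
            if z <= 0.0:
                continue                             # no valid depth at this pixel
            x_lat = (u - cx) * z / fx                # lateral offset in the camera frame (m)
            gx = int((z - ORIGIN[0]) / RES)          # forward distance -> grid row
            gy = int((x_lat - ORIGIN[1]) / RES)      # lateral offset   -> grid column
            if 0 <= gx < grid.shape[0] and 0 <= gy < grid.shape[1]:
                grid[gx, gy] = 100                   # cell now blocked by an HDOL
                print(f"{model.names[int(cls)]} mapped to cell ({gx}, {gy})")

In the CPP stage, cells marked this way are simply treated as obstacles, so the boustrophedon sweep skips them and A* plans the connecting paths around them.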
Pages: 24690-24698
Number of Pages: 9
Related Papers
50 records in total
  • [21] Unifying Obstacle Detection, Recognition, and Fusion Based on the Polarization Color Stereo Camera and LiDAR for the ADAS
    Long, Ningbo
    Yan, Han
    Wang, Liqiang
    Li, Haifeng
    Yang, Qing
    [J]. SENSORS, 2022, 22 (07)
  • [22] Obstacle detection based on depth fusion of lidar and radar in challenging conditions
    Xie, Guotao
    Zhang, Jing
    Tang, Junfeng
    Zhao, Hongfei
    Sun, Ning
    Hu, Manjiang
    [J]. INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION, 2021, 48 (06): 792 - 802
  • [23] Enhancing Off-Road Topography Estimation by Fusing LIDAR and Stereo Camera Data with Interpolated Ground Plane
    Sten, Gustav
    Feng, Lei
    Moller, Bjorn
    [J]. SENSORS, 2025, 25 (02)
  • [24] Fast Multiple Objects Detection and Tracking Fusing Color Camera and 3D LIDAR for Intelligent Vehicles
    Hwang, Soonmin
    Kim, Namil
    Choi, Yukyung
    Lee, Seokju
    Kweon, In So
    [J]. 2016 13TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS AND AMBIENT INTELLIGENCE (URAI), 2016: 234 - 239
  • [25] Dense Depth-Map Estimation Based on Fusion of Event Camera and Sparse LiDAR
    Cui, Mingyue
    Zhu, Yuzhang
    Liu, Yechang
    Liu, Yunchao
    Chen, Gang
    Huang, Kai
    [J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [26] Probabilistic multi-modal depth estimation based on camera–LiDAR sensor fusion
    Obando-Ceron, Johan S.
    Romero-Cano, Victor
    Monteiro, Sildomar
    [J]. Machine Vision and Applications, 2023, 34
  • [27] Path following and obstacle avoidance for an autonomous UAV using a depth camera
    Iacono, Massimiliano
    Sgorbissa, Antonio
    [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2018, 106 : 38 - 46
  • [28] Enhanced Obstacle Detection in Autonomous Vehicles Using 3D LiDAR Mapping Techniques
    Tokgoz, Muhammed Enes
    Yusefi, Abdullah
    Toy, Ibrahim
    Durdu, Akif
    [J]. 2024 23RD INTERNATIONAL SYMPOSIUM INFOTEH-JAHORINA, INFOTEH, 2024,
  • [29] Probabilistic multi-modal depth estimation based on camera-LiDAR sensor fusion
    Obando-Ceron, Johan S.
    Romero-Cano, Victor
    Monteiro, Sildomar
    [J]. MACHINE VISION AND APPLICATIONS, 2023, 34 (05)
  • [30] Scene flow estimation by depth map upsampling and layer assignment for camera-LiDAR system
    Zou, Cheng
    He, Bingwei
    Zhu, Mingzhu
    Zhang, Liwei
    Zhang, Jianwei
    [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2019, 64