Hard-to-Detect Obstacle Mapping by Fusing LIDAR and Depth Camera

Cited by: 4
Authors
Jeyabal, Sidharth [1 ,2 ]
Sachinthana, W. K. R. [1 ]
Samarakoon, S. M. P. Bhagya [1 ]
Elara, Mohan Rajesh [1 ]
Sheu, Bing J. [2 ]
Affiliations
[1] Singapore University of Technology and Design, Engineering Product Development Pillar, Singapore 487372, Singapore
[2] Chang Gung University, College of Engineering, Department of Electrical Engineering, Taoyuan 330, Taiwan
Keywords
Robots; Laser radar; Sensors; Navigation; Cameras; Glass; Sensor fusion; Coverage path planning (CPP); Mapping; Obstacle detection; Robot safety; Vision
DOI
10.1109/JSEN.2024.3409623
CLC Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
In the era of autonomy, intelligent systems capable of navigating and perceiving their surroundings have become ubiquitous. Many sensors have been developed for environmental perception, with LIDAR emerging as a preeminent technology for precise obstacle detection. However, LIDAR has inherent limitations: it cannot detect obstacles that sit below the sensor's mounting height or that its rays pass through. Typical deployment environments contain such obstacles, which can cause collisions and entanglements during robot operation and degrade performance. This research addresses these limitations by recognizing obstacles that traditionally challenge LIDAR's detection capabilities. Objects such as glass, carpets, wires, and ramps have been identified as hard-to-detect objects by LIDAR (HDOL). YOLOv8 has been used to detect HDOL with a depth camera, and the detected HDOL are incorporated into the environmental map, circumventing the constraints posed by LIDAR. Furthermore, HDOL-aware coverage path planning (CPP) has been proposed, using boustrophedon motion with an A* algorithm to navigate the robot safely through the environment. Real-world experiments have validated the applicability of the proposed method for ensuring robot safety.
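At a sketch level, the detection-and-mapping step the abstract describes can be pictured as: run YOLOv8 on the RGB stream, read the aligned depth value at each HDOL detection, back-project it through the pinhole model, and mark the corresponding occupancy-grid cell. The minimal sketch below assumes the ultralytics YOLOv8 Python API and a depth image in meters aligned to the RGB frame; the HDOL class list, camera intrinsics, grid convention, and the mark_hdol helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from ultralytics import YOLO  # YOLOv8 (pip install ultralytics)

# Assumed HDOL classes, taken from the abstract's examples.
HDOL_CLASSES = {"glass", "carpet", "wire", "ramp"}

# Assumed: a YOLOv8 model fine-tuned on the HDOL classes above.
model = YOLO("yolov8n.pt")

def mark_hdol(grid, rgb, depth, fx, cx, resolution=0.05):
    """Detect HDOL in an RGB frame, back-project each detection's center
    pixel using the aligned depth image (meters), and mark the matching
    cell occupied in a robot-centric 2-D occupancy grid."""
    results = model(rgb, verbose=False)[0]
    for box in results.boxes:
        label = model.names[int(box.cls)]
        if label not in HDOL_CLASSES:
            continue
        u, v = (float(t) for t in box.xywh[0][:2])  # detection center (px)
        z = float(depth[int(v), int(u)])            # range along optical axis
        if z <= 0.0:                                # invalid depth reading
            continue
        x = (u - cx) * z / fx     # pinhole back-projection: lateral offset
        col = int(x / resolution) + grid.shape[1] // 2
        row = int(z / resolution)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] = 100  # occupied, ROS OccupancyGrid convention
    return grid

# Example usage: 10 m x 10 m grid at 5 cm resolution, robot at bottom-center.
grid = np.zeros((200, 200), dtype=np.int8)
# grid = mark_hdol(grid, rgb_frame, depth_frame, fx=615.0, cx=320.0)
```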
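The abstract's planner combines boustrophedon (back-and-forth) coverage with A* so that coverage lanes detour around HDOL cells instead of crossing them. The following compact sketch shows that combination under the same hedges: a 4-connected grid where cells >= 100 are occupied, with a hypothetical lane_spacing parameter; it is not the paper's decomposition or cost model.

```python
import heapq

def astar(grid, start, goal):
    """4-connected A* over an occupancy grid; cells >= 100 are blocked.
    Returns the path as a list of (row, col), or None if unreachable."""
    rows, cols = grid.shape
    open_set = [(0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                      # reconstruct path backwards
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr, nc] >= 100:
                continue
            ng = g[cur] + 1
            if ng < g.get((nr, nc), float("inf")):
                g[(nr, nc)] = ng
                came_from[(nr, nc)] = cur
                f = ng + abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan h
                heapq.heappush(open_set, (f, (nr, nc)))
    return None

def boustrophedon_waypoints(grid, lane_spacing=4):
    """Lane endpoints over free rows, alternating direction each lane;
    A* stitches consecutive waypoints so HDOL cells are detoured."""
    rows, cols = grid.shape
    waypoints = []
    for i, r in enumerate(range(0, rows, lane_spacing)):
        free = [c for c in range(cols) if grid[r, c] < 100]
        if not free:
            continue
        ends = (free[0], free[-1]) if i % 2 == 0 else (free[-1], free[0])
        waypoints += [(r, ends[0]), (r, ends[1])]
    return waypoints
```

Stitching consecutive waypoints with astar (for example, concatenating astar(grid, a, b) over successive waypoint pairs) yields a coverage path that respects HDOL entries in the map; the paper's actual cell decomposition, cost terms, and robot footprint handling are not reproduced here.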
Pages: 24690-24698
Page count: 9