Obstacle detection by multi-sensor fusion of a laser scanner and depth camera

Cited by: 1
Authors
Saleem, Zainab [1 ]
Long, Philip [2 ]
Huq, Saif [3 ]
McAfee, Marion [1 ]
Affiliations
[1] Atlantic Technol Univ, Ctr Math Modelling & Intelligent Syst Hlth & Envi, Sligo, Ireland
[2] Atlantic Technol Univ, Dept Mech & Ind Engn, Galway, Ireland
[3] York Coll Penn, Dept Elec & Comp Eng & Comp Sci, Kinsley Engn Ctr, York, PA USA
Source
2023 11TH INTERNATIONAL CONFERENCE ON CONTROL, MECHATRONICS AND AUTOMATION, ICCMA | 2023
Keywords
Multi-Sensor Fusion; 2D LIDAR; Depth Camera; Obstacle Detection;
DOI
10.1109/ICCMA59762.2023.10374970
Chinese Library Classification (CLC) code
TP [Automation Technology; Computer Technology]
Discipline classification code
0812
Abstract
Reliable sensor systems are essential for detecting and tracking the human operator in a human-robot collaborative environment. This work proposes a low-cost obstacle warning system that fuses 2D LiDAR and 3D vision data. Some regions of the workspace can be inaccessible to a single sensor; these regions are defined as blind spots. By mounting a 2D LiDAR at the robot's base and a vision sensor above the workspace, we ensure coverage of the manipulator's entire workspace. For a more efficient system, the human operator is first detected using an object detection algorithm, and then the laser points are segmented. To obtain more accurate results, the data from both sensors is fused using a Kalman filter. This fusion not only provides accurate and fast distance information on the position of a human worker without leaving any blind spots, but is also significantly more affordable than the more common 3D LiDAR plus vision approach.
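The abstract describes fusing distance readings from the two sensors with a Kalman filter. The paper does not give its exact filter design, so the following is only a minimal sketch of the general idea: a one-dimensional constant-position Kalman filter that sequentially updates with a LiDAR range and a camera depth reading of the same target. The class name `DistanceFuser`, the noise variances, and the measurement values are all illustrative assumptions, not the authors' implementation.

```python
class DistanceFuser:
    """1-D Kalman filter fusing two noisy range readings of one target.

    State: scalar distance to the operator. Model: constant position
    with process-noise variance q; each sensor is a direct (H = 1)
    measurement with its own variance. Noise values are placeholders.
    """

    def __init__(self, x0, p0=1.0, q=0.01):
        self.x = x0   # state estimate (metres)
        self.p = p0   # estimate variance
        self.q = q    # process-noise variance

    def _update(self, z, r):
        # Standard scalar Kalman measurement update.
        k = self.p / (self.p + r)          # Kalman gain
        self.x = self.x + k * (z - self.x)
        self.p = (1.0 - k) * self.p

    def step(self, z_lidar, z_cam, r_lidar=0.02**2, r_cam=0.05**2):
        # Predict: position assumed constant, uncertainty grows by q.
        self.p += self.q
        # Sequential updates, one per sensor reading.
        self._update(z_lidar, r_lidar)
        self._update(z_cam, r_cam)
        return self.x

f = DistanceFuser(x0=1.0)
est = f.step(z_lidar=1.02, z_cam=0.98)  # fused estimate in metres
```

Because each sensor contributes a weighted correction inversely proportional to its variance, the fused estimate lands between the two readings and the posterior variance shrinks below either sensor's alone, which is the property the paper exploits for fast, accurate operator distance estimates.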
Pages: 13-18
Page count: 6
References
19 records in total
[1]   Fusion of laser and visual data for robot motion planning and collision avoidance [J].
Baltzakis, H ;
Argyros, A ;
Trahanias, P .
MACHINE VISION AND APPLICATIONS, 2003, 15 (02) :92-100
[2]   A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping [J].
Debeunne, Cesar ;
Vivet, Damien .
SENSORS, 2020, 20 (07)
[3]   Scalable Representation Learning for Long-Term Augmented Reality-Based Information Delivery in Collaborative Human-Robot Perception [J].
Han, Fei ;
Siva, Sriram ;
Zhang, Hao .
VIRTUAL, AUGMENTED AND MIXED REALITY: APPLICATIONS AND CASE STUDIES, VAMR 2019, PT II, 2019, 11575 :47-62
[4]  
Hasan M., 2020, ADV ARTIFICIAL INTEL, P40
[5]   An Improved Method for the Calibration of a 2-D LiDAR With Respect to a Camera by Using a Checkerboard Target [J].
Itami, Fumio ;
Yamazaki, Takaharu .
IEEE SENSORS JOURNAL, 2020, 20 (14) :7906-7917
[6]   Human-aware Robot Navigation in Logistics Warehouses [J].
Kenk, Mourad A. ;
Hassaballah, M. ;
Brethe, Jean-Francois .
ICINCO: PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, VOL 2, 2019, :371-378
[7]  
Labbe Roger, 2014, Kalman and Bayesian filters in Python
[8]   A convolutional neural network-based multi-sensor fusion approach for in-situ quality monitoring of selective laser melting [J].
Li, Jingchang ;
Zhou, Qi ;
Cao, Longchao ;
Wang, Yanzhi ;
Hu, Jiexiang .
JOURNAL OF MANUFACTURING SYSTEMS, 2022, 64 :429-442
[9]
Mohapatra S., 2022, arXiv, arXiv:2111.04875
[10]
Mulyanto A., 2020, JOIV Int. J. Inform. Vis., V4, P231, DOI 10.30630/joiv.4.4.466