Visual perception system design for rock breaking robot based on multi-sensor fusion

Cited by: 6
Authors
Li, Jinguang [1 ]
Liu, Yu [1 ]
Wang, Shuai [1 ]
Wang, Linwei [1 ]
Sun, Yumeng [1 ]
Li, Xin [1 ]
Affiliations
[1] Northeastern Univ, Sch Mech Engn & Automat, Shenyang 110819, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Rock breaking; Visual perception; Multi-sensor fusion; 3D point cloud; MODEL; SHAPE; LIDAR;
DOI
10.1007/s11042-023-16189-w
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812
Abstract
In recent years, mining automation has received significant attention as a critical focus area. Rock breaking robots are common equipment in the mining industry, and their automation requires an accurate and fast visual perception system. Currently, rock detection and the determination of rock breaking surfaces rely heavily on operator experience. To address this, this paper adopts multi-sensor fusion, specifically camera-lidar fusion, as the perception system of the rock breaking robot. The PP-YOLO series of algorithms is employed for 2D detection, generating detection results tailored to the breaking requirements. Rocks detected in the 2D region are then reconstructed in 3D from point cloud data, and rock breaking surfaces are extracted through point cloud segmentation and statistical filtering. Experimental results show a rock detection time of 13.8 ms with an mAP of 91.2%. The segmentation accuracy for rock breaking surfaces is 75.46%, with an average recall of 91.08%, and the segmentation process takes 73.09 ms, meeting the real-time detection and segmentation requirements within the specified rock breaking range. This study thereby addresses the limitations of relying on a single sensor.
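
To make the described pipeline concrete, the minimal sketch below (in Python, using the open-source Open3D library) illustrates the point-cloud stage the abstract outlines: cropping the cloud to the region of a 2D detection, statistical outlier filtering, and segmenting a candidate breaking surface. The function name, file path, box bounds, and all thresholds are hypothetical, and RANSAC plane fitting stands in for the paper's unspecified segmentation method; treat this as an illustration, not the authors' implementation.

import numpy as np
import open3d as o3d

def extract_breaking_surface(cloud_path, box_min, box_max):
    # Load the point cloud (hypothetical path); in a fused camera-lidar
    # setup this would be the cloud registered to the camera frame.
    pcd = o3d.io.read_point_cloud(cloud_path)

    # Crop to the region corresponding to a 2D detection box, here
    # simplified to an axis-aligned 3D box with assumed bounds.
    box = o3d.geometry.AxisAlignedBoundingBox(np.asarray(box_min),
                                              np.asarray(box_max))
    roi = pcd.crop(box)

    # Statistical filtering: drop points whose mean distance to their
    # 20 nearest neighbors exceeds 2 standard deviations of the average.
    roi, _ = roi.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Segment the dominant plane with RANSAC as a stand-in for the
    # breaking-surface extraction step.
    plane_model, inliers = roi.segment_plane(distance_threshold=0.01,
                                             ransac_n=3,
                                             num_iterations=1000)
    surface = roi.select_by_index(inliers)
    return surface, plane_model

A planar model is the simplest choice for a breaking surface; a real rock face may need a more flexible surface model or region-growing segmentation, which Open3D and PCL both support.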
Pages: 24795-24814
Page count: 20