Detecting Construction Equipment Using a Region-Based Fully Convolutional Network and Transfer Learning

Cited by: 130
Authors
Kim, Hongjo [1 ]
Kim, Hyoungkwan [1 ]
Hong, Yong Won [2 ]
Byun, Hyeran [2 ]
Affiliations
[1] Yonsei Univ, Sch Civil & Environm Engn, 50 Yonsei Ro, Seoul 03722, South Korea
[2] Yonsei Univ, Dept Comp Sci, 50 Yonsei Ro, Seoul 03722, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Construction site monitoring; Object detection; Convolutional networks; Benchmark data set; ACTION RECOGNITION; PROGRESS; WORKERS; TRACKING; IDENTIFICATION; PHOTOGRAMMETRY; BIM;
DOI
10.1061/(ASCE)CP.1943-5487.0000731
Chinese Library Classification (CLC)
TP39 [Computer applications];
Discipline codes
081203; 0835;
Abstract
For proper construction site management and plan revisions during construction, it is necessary to understand a construction site's status in real time. Many vision-based construction site-monitoring methods exist, but current technology has not achieved the accuracy required to robustly recognize objects such as construction equipment, workers, and materials in actual jobsite images. To address this issue, this paper proposes a deep convolutional network-based construction object-detection method to accurately recognize construction equipment. A deep convolutional network can achieve high performance in various visual tasks, but it is difficult to apply in the construction industry, where there is not enough publicly available data for training. This problem is solved by transfer learning, which trains a model for the construction industry by transferring the knowledge of models trained in other domains with a large amount of training data. To evaluate the proposed method, a benchmark data set is created for five classes: dump truck, excavator, loader, concrete mixer truck, and road roller. This benchmark data set includes various shapes and poses for each class to evaluate the generalization performance of the proposed construction equipment detection model. Experimental results show that the proposed method performs remarkably well, achieving 96.33% mean average precision. In the future, the proposed model can be used to infer the context of construction operations, producing managerial information such as progress, productivity, and safety. (c) 2017 American Society of Civil Engineers.
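The abstract reports detection quality as mean average precision (mAP) over the five equipment classes. As an illustrative sketch only (the paper does not specify its exact AP protocol here), the PASCAL VOC-style 11-point interpolated AP, and mAP as the mean of per-class APs, can be computed as follows; the per-class AP values below are invented for illustration and are not results from the paper:

```python
def voc_ap_11pt(recalls, precisions):
    """11-point interpolated average precision (PASCAL VOC 2007 style).

    recalls/precisions: parallel lists describing the precision-recall curve.
    """
    ap = 0.0
    for t in [i / 10 for i in range(11)]:  # recall thresholds 0.0, 0.1, ..., 1.0
        # interpolated precision: max precision over all points with recall >= t
        p = max((p for r, p in zip(recalls, precisions) if r >= t), default=0.0)
        ap += p / 11
    return ap

# mAP is the mean of per-class APs; class names follow the paper's five
# benchmark categories, but these AP numbers are made up for illustration.
per_class_ap = {
    "dump_truck": 0.97,
    "excavator": 0.95,
    "loader": 0.96,
    "concrete_mixer_truck": 0.97,
    "road_roller": 0.965,
}
mAP = sum(per_class_ap.values()) / len(per_class_ap)
```

A detector that keeps precision 1.0 across all recall levels scores an AP of 1.0 under this scheme, which is a quick sanity check for the implementation.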
Pages: 15
Related papers
82 references in total
[41]  
Kim H., 2017, P JOINT C COMP CONST, V1, P517
[42]   Data-driven scene parsing method for recognizing construction site objects in the whole image [J].
Kim, Hongjo ;
Kim, Kinam ;
Kim, Hyoungkwan .
AUTOMATION IN CONSTRUCTION, 2016, 71 :271-282
[43]   Vision-Based Object-Centric Safety Assessment Using Fuzzy Inference: Monitoring Struck-By Accidents with Moving Objects [J].
Kim, Hongjo ;
Kim, Kinam ;
Kim, Hyoungkwan .
JOURNAL OF COMPUTING IN CIVIL ENGINEERING, 2016, 30 (04)
[44]  
Kim J, 2017, J COMPUT CIVIL ENG, V31, DOI [10.1061/(ASCE)CP.1943-5487.0000677, 10.1061/(ASCE)CP.1943-5487.0000731]
[45]   Image-based construction hazard avoidance system using augmented reality in wearable device [J].
Kim, Kinam ;
Kim, Hongjo ;
Kim, Hyoungkwan .
AUTOMATION IN CONSTRUCTION, 2017, 83 :390-403
[46]   ImageNet Classification with Deep Convolutional Neural Networks [J].
Krizhevsky, Alex ;
Sutskever, Ilya ;
Hinton, Geoffrey E. .
COMMUNICATIONS OF THE ACM, 2017, 60 (06) :84-90
[47]   Backpropagation Applied to Handwritten Zip Code Recognition [J].
LeCun, Y. ;
Boser, B. ;
Denker, J. S. ;
Henderson, D. ;
Howard, R. E. ;
Hubbard, W. ;
Jackel, L. D. .
NEURAL COMPUTATION, 1989, 1 (04) :541-551
[48]  
Li Yi, 2017, IEEE C COMPUT VIS PA, P2359, DOI [10.1109/CVPR.2017.472, 10.48550/arXiv.1704.03135]
[49]   Optical marker-based end effector pose estimation for articulated excavators [J].
Lundeen, Kurt M. ;
Dong, Suyang ;
Fredricks, Nicholas ;
Akula, Manu ;
Seo, Jongwon ;
Kamat, Vineet R. .
AUTOMATION IN CONSTRUCTION, 2016, 65 :51-64
[50]  
Mallat S., 1999, A Wavelet Tour of Signal Processing, DOI 10.1016/B978-0-12-374370-1.X0001-8