Fine-grained identification of vehicle loads on bridges based on computer vision

Cited by: 14
Authors
Zhu, Jinsong [1 ]
Li, Xingtian [1 ,2 ]
Zhang, Chi [1 ]
Affiliations
[1] Tianjin Univ, Sch Civil Engn, Key Lab Coast Civil Struct Safety, Minist Educ, Tianjin, Peoples R China
[2] Lanzhou Jiaotong Univ, Sch Civil Engn, Lanzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Bridge; Computer vision; Fine-grained; Identification; Vehicle loads; WEIGH-IN-MOTION; OBTAINING SPATIOTEMPORAL INFORMATION; MOVING FORCE IDENTIFICATION; SYSTEM;
DOI
10.1007/s13349-022-00552-w
Chinese Library Classification (CLC)
TU [Building Science];
Discipline classification code
0813;
Abstract
Vehicle load is important for the condition assessment, maintenance and reinforcement of bridges. In recent years, computer vision technology has been applied to the identification of vehicle loads on bridges. According to recent reports, a vehicle can be detected as a whole to obtain its spatiotemporal information; however, the wheelbase and number of axles are difficult to obtain accurately. In addition, the high cost of weigh-in-motion (WIM) equipment makes it difficult to use in practice. It is therefore necessary to explore an economical identification approach that can provide more complete and accurate data for subsequent mechanical analysis and other decision-making. In this paper, an approach for fine-grained identification of vehicle loads on bridges is proposed. Based on deep convolutional neural networks, a vehicle detector was trained to detect vehicles and tires at two different scales. Using the results of vehicle detection and camera calibration, an accurate 3D bounding box reconstruction algorithm is proposed to obtain the vehicle size, position, wheelbase and number of axles. The vehicle was then tracked using an optimized Kalman filter algorithm to obtain its trajectory and speed. Finally, the gross vehicle weight and axle weights were estimated from the axle information and a statistical distribution model of vehicle weight. To test its accuracy and reliability, the vehicle load identification algorithm was implemented and evaluated on an in-service bridge, and the results demonstrated that it is capable of identifying vehicle loads at a fine-grained level.
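The abstract describes the tracking step only at a high level. As a rough illustration (not the authors' implementation, whose "optimized" filter details are not given here), the minimal Python sketch below shows a constant-velocity Kalman filter of the kind commonly used to turn per-frame vehicle positions, such as those recovered from the 3D bounding box on the ground plane, into a trajectory and a speed estimate. All class and parameter names, the noise settings and the 25 fps frame rate are assumptions made for illustration.

# Minimal sketch: constant-velocity Kalman filter for ground-plane vehicle tracking.
# State is [x, y, vx, vy]; measurements are the (x, y) positions per video frame.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt, process_var=1.0, meas_var=0.5):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)    # observe position only
        self.Q = process_var * np.eye(4)                   # process noise covariance
        self.R = meas_var * np.eye(2)                      # measurement noise covariance
        self.P = np.eye(4)                                 # state covariance
        self.x = np.zeros(4)                               # [x, y, vx, vy]

    def initialize(self, xy):
        self.x[:2] = xy

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    @property
    def speed(self):
        return float(np.hypot(self.x[2], self.x[3]))       # m/s if x, y are in metres

# Usage example: noisy ground-plane detections of a vehicle moving at about 20 m/s.
if __name__ == "__main__":
    kf = ConstantVelocityKF(dt=0.04)                       # assumed 25 fps video
    kf.initialize([0.0, 3.5])
    rng = np.random.default_rng(0)
    for k in range(1, 50):
        true_xy = np.array([20.0 * 0.04 * k, 3.5])
        kf.predict()
        kf.update(true_xy + rng.normal(0, 0.2, size=2))    # simulated detection noise
    print(f"estimated speed: {kf.speed:.1f} m/s")

In practice the same predict/update loop is run once per frame per tracked vehicle, and the filtered positions over time give the trajectory while the velocity components give the speed used in the load identification.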
Pages: 427-446
Number of pages: 20