Automated visual inspection of target parts for train safety based on deep learning

Cited by: 19
Authors
Zhou, Fuqiang [1 ]
Song, Ya [1 ]
Liu, Liu [2 ]
Zheng, Dongtian [1 ]
Affiliations
[1] Beihang Univ, Sch Instrumentat Sci & Optoelect Engn, Xueyuan Rd 37, Beijing 100191, Peoples R China
[2] Beijing Inst Aerosp Control Devices, Yongding Rd 52, Beijing 100854, Peoples R China
Keywords
learning (artificial intelligence); feature extraction; data mining; railway safety; traffic engineering computing; inspection; neural nets; automated visual inspection; target parts; train safety; deep learning; inspection accuracy; image recognition; autonomous information mining; stacked auto-encoder convolutional neural network; composite neural network; SAE-CNN; training efficiency; centre plate bolts; moving freight car; CLASSIFICATION;
DOI
10.1049/iet-its.2016.0338
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject classification codes
0808 ; 0809 ;
Abstract
Visual inspection of target parts is a common approach to ensuring train safety. However, some key parts, such as fastening bolts, do not provide sufficient feature information because they are usually small, contaminated, or occluded. These factors reduce inspection accuracy and can lead to serious accidents, so traditional visual inspection relying on hand-crafted feature extraction cannot always meet the requirements of high-accuracy inspection. Deep learning offers clear advantages in image recognition through autonomous information mining, but it is computationally expensive. To resolve these issues, this study proposes a method that combines traditional visual inspection with deep learning. Traditional feature extraction is used to locate the targets approximately, which makes the subsequent deep learning stage purposeful and efficient. A composite neural network, the stacked auto-encoder convolutional neural network (SAE-CNN), is proposed to further improve training efficiency: an SAE is added to a CNN so that the network obtains optimal results faster and more accurately. Taking the inspection of centre plate bolts on a moving freight car as an example, the overall system and specific processes are described. The results show satisfactory accuracy, and a related analysis and comparative experiments were also conducted.
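
The abstract outlines a two-stage design (coarse localization via traditional feature extraction, then SAE-assisted CNN classification) but gives no implementation details. Below is a minimal, illustrative sketch of how an SAE pre-training stage could sit alongside a small CNN classifier applied to cropped bolt regions. It uses PyTorch, and all layer sizes, class counts, and training settings are assumptions for illustration, not the authors' configuration.

# Hypothetical SAE-CNN style pipeline (illustrative only): a stacked
# auto-encoder is pre-trained on localized bolt patches to learn a compact
# encoding, and a compact CNN then classifies the same patches.
import torch
import torch.nn as nn

class StackedAutoEncoder(nn.Module):
    """Two-layer fully connected auto-encoder over flattened image patches."""
    def __init__(self, in_dim=32 * 32, hidden=(256, 64)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden[1], hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class SmallCNN(nn.Module):
    """Compact CNN classifier, e.g. bolt present / missing / damaged."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 5 * 5, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def pretrain_sae(sae, patches, epochs=10, lr=1e-3):
    """Unsupervised reconstruction pre-training on localized patches."""
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    flat = patches.flatten(1)               # (N, 32*32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(sae(flat), flat)
        loss.backward()
        opt.step()
    return sae

if __name__ == "__main__":
    patches = torch.rand(64, 1, 32, 32)     # stand-in for localized bolt ROIs
    sae = pretrain_sae(StackedAutoEncoder(), patches)
    cnn = SmallCNN()
    logits = cnn(patches)                   # supervised training would follow
    print(logits.shape)                     # torch.Size([64, 3])

In the paper's pipeline, the patches would come from the coarse localization step rather than random tensors, and the SAE features would inform the CNN stage; the exact coupling between the two networks is not specified in this record.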
Pages: 550-555
Number of pages: 6