A novel approach to tool condition monitoring based on multi-sensor data fusion imaging and an attention mechanism

Cited by: 23
Authors
Zeng, Yunfei [1 ]
Liu, Riliang [1 ,2 ,3 ]
Liu, Xinfeng [1 ]
Affiliations
[1] Shandong University, School of Mechanical Engineering, Jinan 250061, People's Republic of China
[2] Shandong University, Key Laboratory of High Efficiency and Clean Mechanical Manufacture, Ministry of Education, Jinan 250061, People's Republic of China
[3] Shandong University, National Demonstration Center for Experimental Mechanical Engineering Education, Jinan 250061, People's Republic of China
Keywords
attention mechanism; convolutional neural network; multi-sensor data fusion; tool condition monitoring; time series imaging; wear measurement; WEAR; RECOGNITION; NETWORKS
DOI
10.1088/1361-6501/abea3f
CLC number
T [Industrial Technology]
Discipline code
08
Abstract
Tool wear severely degrades product quality and machining efficiency. This paper proposes a new tool condition monitoring (TCM) method based on multi-sensor data fusion imaging and an attention mechanism to indirectly measure and monitor tool wear. First, the multi-sensor signals collected during machine-tool operation are encoded and fused: a novel triangular matrix of angle summation method fuses the multi-sensor time series at the data layer and encodes them as two-dimensional images. This encoding preserves both the detailed characteristics of the data and the internal temporal relationships within the signals. A deep residual network with a convolutional block attention module is then used to extract deep features from the encoded images and to identify the wear stage of the tool. The network selects informative features in the channel and spatial domains through its attention mechanism, and stacks residual blocks to increase depth and realize deep feature extraction. Unlike traditional machine learning methods, this end-to-end deep learning model does not rely on feature engineering, which demands considerable expert knowledge and manual experience. In the experimental study, a public data set was used to train the model, and the proposed model achieved an accuracy of up to 94%. Its feasibility and effectiveness were further verified by comparison with other methods.
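The "triangular matrix of angle summation" encoding described in the abstract closely resembles a Gramian angular summation field (GASF). Below is a minimal sketch under that assumption: each sensor channel is rescaled to [-1, 1], mapped to polar angles, and imaged by pairwise angle summation, with the per-sensor images stacked as channels for data-layer fusion. The function names and the channel-stacking fusion strategy are illustrative assumptions, not the paper's exact implementation.

```python
# GASF-style angle-summation imaging of multi-sensor time series (illustrative sketch).
import numpy as np

def gasf_encode(series: np.ndarray) -> np.ndarray:
    """Encode a 1D time series of length N as an N x N angle-summation image."""
    x_min, x_max = series.min(), series.max()
    # Rescale into [-1, 1] so that arccos is well defined.
    x = (2 * series - x_max - x_min) / (x_max - x_min + 1e-12)
    x = np.clip(x, -1.0, 1.0)
    phi = np.arccos(x)                            # polar-angle representation
    return np.cos(phi[:, None] + phi[None, :])    # entry (i, j) = cos(phi_i + phi_j)

def fuse_sensors_to_image(signals: np.ndarray) -> np.ndarray:
    """Stack per-sensor GASF images as channels: (C, N) -> (C, N, N)."""
    return np.stack([gasf_encode(s) for s in signals], axis=0)

if __name__ == "__main__":
    # Example: three synthetic sensor channels (e.g. force, vibration, AE), 64 samples each.
    rng = np.random.default_rng(0)
    sensors = rng.standard_normal((3, 64))
    image = fuse_sensors_to_image(sensors)
    print(image.shape)  # (3, 64, 64): a multi-channel image fed to the attention-based classifier
```

The classifier described in the abstract combines residual blocks with a convolutional block attention module (CBAM) that reweights features in the channel and spatial domains. The following sketch follows the standard CBAM design (channel attention followed by spatial attention); the reduction ratio, kernel size, and placement inside the residual blocks are assumptions rather than the paper's reported settings.

```python
# Standard CBAM building block (channel attention then spatial attention), PyTorch sketch.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return torch.sigmoid(avg + mx).view(b, c, 1, 1) * x

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max map
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1))) * x

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied inside a residual block."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```

In a ResNet-style backbone, such a module would typically be inserted after the convolutional layers of each residual block and before the skip connection is added, so the attention weights modulate only the residual branch.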
Page count: 17