Degradation stage classification via interpretable feature learning

Times cited: 18
Authors
Alfeo, Antonio L. [1]
Cimino, Mario G. C. A. [1]
Vaglini, Gigliola [1]
Affiliation
[1] Univ Pisa, Dept Informat Engn, Largo L Lazzarino 1, Pisa, Italy
Keywords
Author keywords: Deep learning; Feature learning; Interpretable machine learning; Explainable artificial intelligence; Autoencoder; Predictive maintenance
Keywords Plus: PREDICTIVE MAINTENANCE; FAULT-DIAGNOSIS; FEATURE FUSION; DENOISING AUTOENCODERS; DEEP AUTOENCODER; LIFE PREDICTION; FRAMEWORK; MACHINES; NETWORK
DOI
10.1016/j.jmsy.2021.05.003
Chinese Library Classification (CLC)
T [Industrial Technology]
Subject classification code
08
Abstract
Predictive maintenance (PdM) advocates the use of machine learning technologies to monitor an asset's health condition and plan maintenance activities accordingly. However, depending on the specific degradation process, some health-related measures (e.g. temperature) may not be informative enough to reliably assess the health stage. Moreover, each measure must be properly processed to extract the information linked to the health stage. These issues are usually addressed through manual feature engineering, which results in high management costs and poor generalization capability. In this work, we address them by coupling a health stage classifier with a feature learning mechanism. With feature learning, minimally processed data are automatically transformed into informative features. Many effective feature learning approaches are based on deep learning, in which the features are obtained as a non-linear combination of the inputs; it is therefore difficult to understand each input's contribution to the classification outcome, and thus the reasoning behind the model. Yet, such insights are increasingly required to interpret the results and assess the reliability of the model. In this regard, we propose a feature learning approach able to (i) effectively extract high-quality features by processing different input signals, and (ii) provide useful insights into the most informative domain transformations (e.g. Fourier transform or probability density function) of the input signals (e.g. vibration or temperature). The effectiveness of the proposed approach is tested on publicly available real-world datasets on bearings' progressive deterioration and compared with the traditional feature engineering approach.
Pages: 972-983
Page count: 12
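
To make the approach described in the abstract concrete, the sketch below illustrates the overall pipeline in Python. It is not the authors' implementation: the synthetic data, the network sizes, and the non-negative per-transform gates used here to rank domain transformations are illustrative assumptions. The sketch (i) maps raw signal windows to candidate domain transformations (Fourier spectrum and empirical probability density function), (ii) learns compact features with an autoencoder, and (iii) reads the trained gate magnitudes as a rough indicator of which transformation was most informative.

import numpy as np
import torch
import torch.nn as nn

def domain_transforms(signal):
    # Map one raw window to candidate domain representations.
    spectrum = np.abs(np.fft.rfft(signal))                            # Fourier transform
    pdf, _ = np.histogram(signal, bins=len(spectrum), density=True)   # empirical PDF
    return np.stack([spectrum, pdf])                                  # (n_transforms, n_bins)

class GatedAutoencoder(nn.Module):
    # Autoencoder with one non-negative gate per domain transform.
    # After training, the gate magnitudes hint at which transform
    # contributed most to the learned features (the interpretability cue;
    # this gating scheme is an assumption, not the paper's exact design).
    def __init__(self, n_transforms, n_bins, n_features=8):
        super().__init__()
        self.gates = nn.Parameter(torch.ones(n_transforms))
        self.encoder = nn.Sequential(nn.Linear(n_transforms * n_bins, 64),
                                     nn.ReLU(), nn.Linear(64, n_features))
        self.decoder = nn.Sequential(nn.Linear(n_features, 64),
                                     nn.ReLU(), nn.Linear(64, n_transforms * n_bins))

    def forward(self, x):  # x: (batch, n_transforms, n_bins)
        gated = x * torch.relu(self.gates)[None, :, None]
        z = self.encoder(gated.flatten(1))
        return self.decoder(z), z

# Toy run on synthetic vibration-like windows (real data would come from
# run-to-failure bearing recordings such as those mentioned in the abstract).
rng = np.random.default_rng(0)
windows = rng.normal(size=(32, 256))
X = torch.tensor(np.stack([domain_transforms(w) for w in windows]), dtype=torch.float32)
model = GatedAutoencoder(n_transforms=X.shape[1], n_bins=X.shape[2])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    recon, _ = model(X)
    loss = nn.functional.mse_loss(recon, X.flatten(1))
    opt.zero_grad(); loss.backward(); opt.step()
print("transform relevance (gate weights):", torch.relu(model.gates).tolist())

In a complete pipeline, the learned features z would then feed a health stage classifier (e.g. a random forest or a softmax layer), and the gate readout would support the interpretability requirement by pointing at the dominant domain transformation for each input signal.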