DLformer: A Dynamic Length Transformer-Based Network for Efficient Feature Representation in Remaining Useful Life Prediction

Cited by: 37
Authors
Ren, Lei [1,2]
Wang, Haiteng [1]
Huang, Gao [3]
Affiliations
[1] Beihang Univ, Sch Automat Sci & Elect Engn, Beijing 100191, Peoples R China
[2] Zhongguancun Lab, Beijing 100094, Peoples R China
[3] Tsinghua Univ, Dept Automat, Beijing 100084, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Transformers; Feature extraction; Maintenance engineering; Time series analysis; Computational modeling; Adaptation models; Task analysis; Adaptive inference; deep learning; feature representation; interpretability; remaining useful life (RUL) prediction; PROGNOSTICS;
DOI
10.1109/TNNLS.2023.3257038
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Representation learning-based remaining useful life (RUL) prediction plays a crucial role in improving the security and reducing the maintenance cost of complex systems. Despite their superior performance, the high computational cost of deep networks hinders deployment on low-compute platforms, and a significant share of that cost comes from representing long sequences. In contrast to most RUL prediction methods, which learn features over a fixed sequence length, we consider that each time series has its own characteristics and that the sequence length should be adjusted adaptively. Our motivation is that an "easy" sample with representative characteristics can be predicted correctly even from a short feature representation, while a "hard" sample needs the complete representation. We therefore focus on sequence length and propose a dynamic length transformer (DLformer) that adaptively learns sequence representations of different lengths. A feature reuse mechanism is then developed to exploit previously learned features and reduce redundant computation. Finally, to achieve dynamic feature representation, a dedicated confidence strategy is designed to estimate the confidence level of each prediction and decide when to stop. Regarding interpretability, the dynamic architecture helps humans understand which parts of the model are activated for a given sample. Experiments on multiple datasets show that DLformer can increase inference speed by up to 90% with less than 5% degradation in model accuracy.
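The abstract's early-exit idea can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch reading of confidence-gated, dynamic-length inference with feature reuse, reconstructed from the abstract alone: DynamicLengthPredictor, the candidate lengths (16, 32, 64), the mean-pooled exit heads, and the threshold tau are illustrative assumptions, not the authors' implementation.

# A minimal sketch of confidence-gated, dynamic-length inference.
# All names and hyperparameters here are assumptions, not DLformer itself.
import torch
import torch.nn as nn

class DynamicLengthPredictor(nn.Module):
    def __init__(self, d_model: int, n_features: int, lengths=(16, 32, 64)):
        super().__init__()
        self.lengths = sorted(lengths)
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        # One encoder stage per candidate length; nn.TransformerEncoder
        # deep-copies `layer`, so the stages do not share weights.
        self.stages = nn.ModuleList(
            nn.TransformerEncoder(layer, num_layers=1) for _ in self.lengths)
        self.rul_heads = nn.ModuleList(nn.Linear(d_model, 1) for _ in self.lengths)
        self.conf_heads = nn.ModuleList(nn.Linear(d_model, 1) for _ in self.lengths)

    @torch.no_grad()  # inference-only sketch
    def forward(self, x: torch.Tensor, tau: float = 0.9):
        # x: (1, T, n_features) with T >= max(lengths); batch of one so the
        # early-exit decision is made per sample.
        cached = None
        for i, L in enumerate(self.lengths):
            seg = self.embed(x[:, -L:, :])       # embed the newest L steps
            if cached is not None:
                # Feature reuse: keep the previous stage's encoded features
                # for the overlapping suffix instead of recomputing them.
                new = L - cached.size(1)
                seg = torch.cat([seg[:, :new, :], cached], dim=1)
            cached = self.stages[i](seg)
            pooled = cached.mean(dim=1)          # (1, d_model) summary
            rul = self.rul_heads[i](pooled)      # RUL estimate at this exit
            conf = torch.sigmoid(self.conf_heads[i](pooled))
            # "Easy" samples clear the confidence threshold at a short
            # length; "hard" samples fall through to the full length.
            if conf.item() >= tau or i == len(self.lengths) - 1:
                return rul, L

# Example: a 128-step window with 14 sensor channels (turbofan-style data).
model = DynamicLengthPredictor(d_model=64, n_features=14).eval()
rul, used_len = model(torch.randn(1, 128, 14))

Under this reading, the speedup comes from most samples exiting at a short length, while accuracy is protected because low-confidence samples still receive the full-length representation, and the exit length used per sample is itself an interpretable signal.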
Pages: 5942-5952
Number of pages: 11