A multi-head self-attention autoencoder network for fault detection of wind turbine gearboxes under random loads

Cited by: 4
Authors
Yu, Xiaoxia [1 ]
Zhang, Zhigang [1 ]
Tang, Baoping [2 ]
Zhao, Minghang [3 ]
Affiliations
[1] Chongqing Univ Technol, Coll Mech Engn, Chongqing 400054, Peoples R China
[2] Chongqing Univ, Coll Mech Engn, Chongqing 400044, Peoples R China
[3] Harbin Inst Technol, Sch Ocean Engn, Weihai 264209, Shandong, Peoples R China
Keywords
multi-head self-attention; fault detection; dynamic warning threshold (DWT); wind turbine gearbox;
DOI
10.1088/1361-6501/ad4dd4
Chinese Library Classification
T [Industrial Technology];
Discipline code
08 ;
Abstract
Wind turbine gearboxes operate under random loads for extended periods, and the fault-detection indicators constructed by existing deep learning models fluctuate constantly with the load, which easily causes frequent false alarms. Therefore, a multi-head self-attention autoencoder network is proposed and combined with a dynamic warning threshold to detect faults in a wind turbine gearbox subjected to random loads. The multi-head self-attention layer enhances the feature-extraction capability of the proposed network by capturing both global and local features of the input data. Furthermore, to suppress the influence of random loads, a dynamic warning threshold is designed based on the reconstruction error between the inputs and outputs of the proposed network. Finally, the effectiveness of the proposed method is verified using vibration data of wind turbine gearboxes from an actual wind farm.
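The abstract does not give the exact construction of the dynamic warning threshold (DWT); the full derivation is in the paper itself (see the DOI above). As a minimal sketch of the general idea, assuming the threshold is a rolling mean plus k standard deviations of the autoencoder's reconstruction-error sequence (window size and k are illustrative, not from the paper):

```python
import numpy as np

def dynamic_warning_threshold(recon_error, window=50, k=3.0):
    """Dynamic warning threshold over a reconstruction-error sequence.

    Assumed rule: rolling mean + k * rolling std over the last `window`
    points, so the threshold tracks load-induced fluctuation instead of
    staying fixed.
    """
    recon_error = np.asarray(recon_error, dtype=float)
    dwt = np.empty_like(recon_error)
    for i in range(recon_error.size):
        win = recon_error[max(0, i - window + 1):i + 1]
        dwt[i] = win.mean() + k * win.std()
    return dwt

# Healthy reconstruction error fluctuates with load; a fault raises it.
rng = np.random.default_rng(0)
error = 0.10 + 0.02 * rng.standard_normal(200)
error[150:] += 0.30                      # simulated gearbox fault onset
dwt = dynamic_warning_threshold(error)
alarms = error > dwt                     # alarm where error exceeds the DWT
```

Because the threshold adapts to the recent error level, load-driven drift in the healthy state raises the threshold along with the indicator, while an abrupt fault-induced jump still crosses it and triggers an alarm.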
Pages: 11