AttenEpilepsy: A 2D convolutional network model based on multi-head self-attention

Cited by: 0
Authors
Ma, Shuang [1 ,2 ]
Wang, Haifeng [1 ,2 ]
Yu, Zhihao [1 ,2 ]
Du, Luyao [4 ]
Zhang, Ming [1 ,2 ]
Fu, Qingxi [2 ,3 ]
Affiliations
[1] Linyi Univ, Sch Informat Sci & Engn, Linyi 276000, Shandong, Peoples R China
[2] Linyi Peoples Hosp, Hlth & Med Big Data Ctr, Linyi 276034, Shandong, Peoples R China
[3] Linyi City Peoples Hosp, Linyi Peoples Hosp Shandong Prov, Linyi 276034, Peoples R China
[4] Wuhan Univ Technol, Sch Automation, Wuhan, Peoples R China
Keywords
Long-range dependence; Time-frequency image; Feature extraction; Time-frequency context encoding; Multi-head self-attention; Causal convolution; SLEEP STAGE CLASSIFICATION; LEARNING APPROACH;
DOI
10.1016/j.enganabound.2024.105989
CLC Classification
T [Industrial Technology];
Discipline Code
08 ;
Abstract
Existing epilepsy detection models focus more on local information than on genuine long-range dependence when capturing time-frequency image features. This results in imprecise feature vectors and leaves room to improve detection accuracy. AttenEpilepsy is a novel 2D convolutional network model that uses a multi-head self-attention mechanism to classify single-channel EEG signals into seizure, inter-seizure, and healthy states. The AttenEpilepsy model consists of two parts, namely feature extraction and time-frequency context encoding (STCE). A feature extraction method combining multi-path convolution and adaptive hybrid feature recalibration is proposed, in which multi-path convolution with kernels of different sizes extracts multi-scale features from the time-frequency images. STCE consists of two modules: multi-head self-attention and causal convolution. A modified multi-head self-attention mechanism models the extracted time-frequency features, and causal convolution analyses the time dependencies of the frequency information. A public dataset from the University of Bonn Epilepsy Research Center is used to evaluate the performance of the AttenEpilepsy model. The experimental results show that the AttenEpilepsy model achieved accuracy (AC), sensitivity (SE), specificity (SP), and F1 score (F1) of 99.81%, 99.82%, 99.89%, and 99.83%, respectively. The robustness of the model is further tested by introducing various types of noise into the input data. The proposed AttenEpilepsy network model outperforms the state-of-the-art across all evaluation metrics.
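The abstract names two core operations inside the STCE stage: scaled dot-product multi-head self-attention over the extracted feature sequence, and causal convolution over time. The NumPy sketch below is not the authors' implementation; all dimensions, weight names, and the single-sequence (unbatched) layout are illustrative assumptions, intended only to show what the two operations compute.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for the attention weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, wo, n_heads):
    """Scaled dot-product multi-head self-attention.

    x: (seq_len, d_model) sequence of time-frequency feature vectors.
    wq, wk, wv, wo: (d_model, d_model) projection matrices (hypothetical names).
    """
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    # Project, then split d_model into n_heads independent subspaces.
    q = (x @ wq).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    k = (x @ wk).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    v = (x @ wv).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, seq, seq)
    attn = softmax(scores, axis=-1)
    # Merge heads back and apply the output projection.
    out = (attn @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ wo

def causal_conv1d(x, kernel):
    """1D causal convolution: the output at step t uses only inputs <= t."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])  # left-pad only, no future leak
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(x))])
```

The causality property is what distinguishes this convolution from an ordinary centred one: perturbing the input after step t leaves the outputs up to t unchanged, which is why it is suited to modelling time dependencies.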
Pages: 11