AttenEpilepsy: A 2D convolutional network model based on multi-head self-attention

Cited by: 0
Authors
Ma, Shuang [1 ,2 ]
Wang, Haifeng [1 ,2 ]
Yu, Zhihao [1 ,2 ]
Du, Luyao [4 ]
Zhang, Ming [1 ,2 ]
Fu, Qingxi [2 ,3 ]
Affiliations
[1] Linyi Univ, Sch Informat Sci & Engn, Linyi 276000, Shandong, Peoples R China
[2] Linyi Peoples Hosp, Hlth & Med Big Data Ctr, Linyi 276034, Shandong, Peoples R China
[3] Linyi City Peoples Hosp, Linyi Peoples Hosp Shandong Prov, Linyi 276034, Peoples R China
[4] Wuhan Univ Technol, Sch Automation, Wuhan, Peoples R China
Keywords
Long-range dependence; Time-frequency image; Feature extraction; Time-frequency context encoding; Multi-head self-attention; Causal convolution; SLEEP STAGE CLASSIFICATION; LEARNING APPROACH;
DOI
10.1016/j.enganabound.2024.105989
CLC Number
T [Industrial Technology];
Subject Classification Code
08;
Abstract
Existing epilepsy detection models emphasize local information over genuine long-range dependence when capturing time-frequency image features, which leads to imprecise feature vectors and leaves room to improve detection accuracy. AttenEpilepsy is a novel 2D convolutional network model that uses a multi-head self-attention mechanism to classify single-channel EEG signals into seizure (ictal), inter-seizure (interictal), and healthy states. The model consists of two parts: feature extraction and time-frequency context encoding (STCE). For feature extraction, a method combining multi-path convolution and adaptive hybrid feature recalibration is proposed, in which parallel convolutions with kernels of different sizes extract multi-scale features from the time-frequency images. STCE comprises two modules, multi-head self-attention and causal convolution: a modified multi-head self-attention mechanism models the extracted time-frequency features, and causal convolution analyses the temporal dependencies of the frequency information. The performance of AttenEpilepsy is evaluated on a public dataset from the University of Bonn Epilepsy Research Center. Experimental results show that the model achieved accuracy (AC), sensitivity (SE), specificity (SP), and F1 score (F1) of 99.81%, 99.82%, 99.89%, and 99.83%, respectively. Robustness is further tested by injecting various types of noise into the input data. The proposed AttenEpilepsy network model outperforms state-of-the-art methods across these evaluation metrics.
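The abstract outlines a concrete pipeline: parallel 2D convolutions with different kernel sizes, a feature-recalibration step, and an STCE stage that pairs multi-head self-attention with causal convolution. Below is a minimal PyTorch sketch of that pipeline, included only to make the described structure concrete. The module and parameter names (MultiPathConv, Recalibrate, STCE, kernel_sizes, d_model) and the squeeze-and-excitation-style recalibration are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the AttenEpilepsy pipeline described in the abstract.
# All names and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPathConv(nn.Module):
    """Parallel 2D convolutions with different kernel sizes over a
    time-frequency image; path outputs are concatenated along channels."""
    def __init__(self, in_ch=1, ch_per_path=16, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Conv2d(in_ch, ch_per_path, k, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):                       # x: (B, 1, F, T)
        return torch.cat([F.relu(p(x)) for p in self.paths], dim=1)

class Recalibrate(nn.Module):
    """Squeeze-and-excitation-style channel reweighting, used here as a
    stand-in for 'adaptive hybrid feature recalibration' (assumption)."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, F, T)
        w = self.fc(x.mean(dim=(2, 3)))         # global average pool -> (B, C)
        return x * w[:, :, None, None]

class STCE(nn.Module):
    """Multi-head self-attention over the time axis followed by a causal
    (left-padded) 1D convolution, mirroring the two STCE modules."""
    def __init__(self, d_model, n_heads=4, k=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.causal = nn.Conv1d(d_model, d_model, k, padding=k - 1)
        self.k = k

    def forward(self, x):                       # x: (B, T, d_model)
        h, _ = self.attn(x, x, x)
        h = self.causal(h.transpose(1, 2))[..., : -(self.k - 1)]  # trim right pad
        return h.transpose(1, 2)

class AttenEpilepsySketch(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(MultiPathConv(), Recalibrate(48))
        self.stce = STCE(d_model=48)
        self.head = nn.Linear(48, n_classes)    # ictal / interictal / healthy

    def forward(self, x):                       # x: (B, 1, F, T) spectrogram
        f = self.features(x).mean(dim=2)        # pool frequency axis -> (B, C, T)
        z = self.stce(f.transpose(1, 2))        # attend over time -> (B, T, C)
        return self.head(z.mean(dim=1))         # class logits

logits = AttenEpilepsySketch()(torch.randn(2, 1, 64, 128))
print(logits.shape)  # torch.Size([2, 3])
```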
Pages: 11
Related Papers
50 records in total
  • [21] Multi-modal multi-head self-attention for medical VQA
    Vasudha Joshi
    Pabitra Mitra
    Supratik Bose
    Multimedia Tools and Applications, 2024, 83 : 42585 - 42608
  • [22] MASPP and MWASP: multi-head self-attention based modules for UNet network in melon spot segmentation
    Tran, Khoa-Dang
    Ho, Trang-Thi
    Huang, Yennun
    Le, Nguyen Quoc Khanh
    Tuan, Le Quoc
    Ho, Van Lam
    JOURNAL OF FOOD MEASUREMENT AND CHARACTERIZATION, 2024, 18 (5) : 3935 - 3949
  • [23] A Multi-Head Self-Attention Transformer-Based Model for Traffic Situation Prediction in Terminal Areas
    Yu, Zhou
    Shi, Xingyu
    Zhang, Zhaoning
    IEEE ACCESS, 2023, 11 : 16156 - 16165
  • [24] Remaining mechanical useful life prediction for circuit breaker based on convolutional variational autoencoder and multi-head self-attention
    Sun S.
    Wang Z.
    Chen J.
    Huang G.
    Wang J.
    Yi Qi Yi Biao Xue Bao/Chinese Journal of Scientific Instrument, 2024, 45 (03): : 106 - 118
  • [25] An integrated multi-head dual sparse self-attention network for remaining useful life prediction
    Zhang, Jiusi
    Li, Xiang
    Tian, Jilun
    Luo, Hao
    Yin, Shen
    RELIABILITY ENGINEERING & SYSTEM SAFETY, 2023, 233
  • [26] Riding feeling recognition based on multi-head self-attention LSTM for driverless automobile
    Tang, Xianzhi
    Xie, Yongjia
    Li, Xinlong
    Wang, Bo
    PATTERN RECOGNITION, 2025, 159
  • [27] Multi-Head Self-Attention Transformation Networks for Aspect-Based Sentiment Analysis
    Lin, Yuming
    Wang, Chaoqiang
    Song, Hao
    Li, You
    IEEE ACCESS, 2021, 9 : 8762 - 8770
  • [28] MSIN: An Efficient Multi-head Self-attention Framework for Inertial Navigation
    Shi, Gaotao
    Pan, Bingjia
    Ni, Yuzhi
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2023, PT I, 2024, 14487 : 455 - 473
  • [29] Hunt for Unseen Intrusion: Multi-Head Self-Attention Neural Detector
    Seo, Seongyun
    Han, Sungmin
    Park, Janghyeon
    Shim, Shinwoo
    Ryu, Han-Eul
    Cho, Byoungmo
    Lee, Sangkyun
    IEEE ACCESS, 2021, 9 : 129635 - 129647
  • [30] Efficient Road Traffic Video Congestion Classification Based on the Multi-Head Self-Attention Vision Transformer Model
    Khalladi, Sofiane Abdelkrim
    Ouessai, Asmaa
    Benamara, Nadir Kamel
    Keche, Mokhtar
    TRANSPORT AND TELECOMMUNICATION JOURNAL, 2024, 25 (01) : 20 - 30