AttenEpilepsy: A 2D convolutional network model based on multi-head self-attention

Cited by: 0
Authors
Ma, Shuang [1 ,2 ]
Wang, Haifeng [1 ,2 ]
Yu, Zhihao [1 ,2 ]
Du, Luyao [4 ]
Zhang, Ming [1 ,2 ]
Fu, Qingxi [2 ,3 ]
Affiliations
[1] Linyi Univ, Sch Informat Sci & Engn, Linyi 276000, Shandong, Peoples R China
[2] Linyi Peoples Hosp, Hlth & Med Big Data Ctr, Linyi 276034, Shandong, Peoples R China
[3] Linyi City Peoples Hosp, Linyi Peoples Hosp Shandong Prov, Linyi 276034, Peoples R China
[4] Wuhan Univ Technol, Sch Automation, Wuhan, Peoples R China
Keywords
Long-range dependence; Time-frequency image; Feature extraction; Time-frequency context encoding; Multi-head self-attention; Causal convolution; SLEEP STAGE CLASSIFICATION; LEARNING APPROACH;
DOI
10.1016/j.enganabound.2024.105989
Chinese Library Classification
T [Industrial Technology]
Subject Classification Code
08
Abstract
Existing epilepsy detection models emphasize local information over genuine long-range dependence when capturing time-frequency image features, which leads to imprecise feature vectors and leaves room to improve detection accuracy. AttenEpilepsy is a novel 2D convolutional network model that uses a multi-head self-attention mechanism to classify single-channel EEG signals into seizure, inter-seizure, and healthy states. The model consists of two parts: feature extraction and time-frequency context encoding (STCE). The feature extraction stage combines multi-path convolution with adaptive hybrid feature recalibration, where multi-path convolution with kernels of different sizes extracts multi-scale features from the time-frequency images. STCE comprises two modules, multi-head self-attention and causal convolution: a modified multi-head self-attention mechanism models the extracted time-frequency features, and causal convolution analyses the frequency information along the temporal dependencies. The model is evaluated on a public dataset from the University of Bonn Epilepsy Research Center. Experimental results show that AttenEpilepsy achieves accuracy (AC), sensitivity (SE), specificity (SP), and F1 score (F1) of 99.81%, 99.82%, 99.89%, and 99.83%, respectively. Robustness is further tested by introducing various types of noise into the input data. The proposed AttenEpilepsy model outperforms state-of-the-art methods across these evaluation metrics.
Pages: 11
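To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of the three stated components: multi-path 2D convolution over a time-frequency image, a multi-head self-attention encoder, and causal convolution. All module names, kernel sizes, channel widths, and the frequency-pooling step are illustrative assumptions, and the adaptive hybrid feature recalibration step is omitted; this is a sketch under those assumptions, not the authors' implementation.

```python
# Illustrative sketch only: layer sizes, kernel choices, and module names are
# assumptions, not the published AttenEpilepsy implementation.
import torch
import torch.nn as nn


class MultiPathConv(nn.Module):
    """Feature extraction: parallel 2D convolutions with different kernel
    sizes capture multi-scale patterns in the time-frequency image."""

    def __init__(self, in_ch: int = 1, out_ch: int = 32):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, out_ch, k, padding=k // 2),
                          nn.BatchNorm2d(out_ch), nn.ReLU())
            for k in (3, 5, 7)  # assumed kernel sizes
        ])

    def forward(self, x):                       # x: (B, 1, F, T)
        return torch.cat([p(x) for p in self.paths], dim=1)  # (B, 96, F, T)


class CausalConv1d(nn.Module):
    """1D causal convolution: left-pads so each output step depends only on
    the current and past time positions."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.pad = kernel_size - 1
        self.conv = nn.Conv1d(channels, channels, kernel_size)

    def forward(self, x):                       # x: (B, C, T)
        return self.conv(nn.functional.pad(x, (self.pad, 0)))


class STCE(nn.Module):
    """Time-frequency context encoding: multi-head self-attention over the
    time axis followed by causal convolution."""

    def __init__(self, d_model: int = 96, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.causal = CausalConv1d(d_model)

    def forward(self, x):                       # x: (B, T, d_model)
        attn_out, _ = self.attn(x, x, x)
        x = self.norm(x + attn_out)             # residual + layer norm
        return self.causal(x.transpose(1, 2)).transpose(1, 2)


class AttenEpilepsySketch(nn.Module):
    """End-to-end sketch: multi-path conv features -> pool over frequency ->
    STCE -> 3-way classifier (seizure / inter-seizure / healthy)."""

    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = MultiPathConv()
        self.pool = nn.AdaptiveAvgPool2d((1, None))   # collapse frequency axis
        self.encoder = STCE(d_model=96)
        self.head = nn.Linear(96, n_classes)

    def forward(self, x):                       # x: (B, 1, F, T) time-frequency image
        f = self.features(x)                    # (B, 96, F, T)
        f = self.pool(f).squeeze(2)             # (B, 96, T)
        z = self.encoder(f.transpose(1, 2))     # (B, T, 96)
        return self.head(z.mean(dim=1))         # average over time, then classify


if __name__ == "__main__":
    logits = AttenEpilepsySketch()(torch.randn(2, 1, 64, 128))
    print(logits.shape)                         # torch.Size([2, 3])
```

The sketch keeps the stated ordering (local multi-scale convolution first, then attention-based long-range modelling, then causal convolution along time); the classifier head and pooling choices are placeholders.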