An interpretable and generalizable deep learning model for iEEG-based seizure prediction using prototype learning and contrastive learning

Cited by: 1
Authors
Gao, Yikai [1 ,2 ]
Liu, Aiping [2 ]
Cui, Heng [2 ]
Qian, Ruobing [2 ]
Chen, Xun [1 ,2 ]
Affiliations
[1] Department of Neurosurgery, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui
[2] Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei
Funding
National Natural Science Foundation of China
Keywords
Deep learning; Generalizability; Interpretability; Intracranial electroencephalography; Seizure prediction; Signal processing
DOI
10.1016/j.compbiomed.2024.109257
Abstract
Epileptic seizure prediction plays a crucial role in enhancing the quality of life for individuals with epilepsy. In recent years, a multitude of deep learning-based approaches have emerged to tackle this challenging task, leading to significant advances. However, the ‘black-box’ nature of deep learning models and considerable interpatient variability impede their interpretability and generalization, severely hampering their efficacy in real-world clinical applications. To address these issues, our study aims to establish an interpretable and generalizable seizure prediction model that meets the demands of clinical diagnosis. Our method extends self-interpretable prototype learning networks into a novel domain adaptation framework designed specifically for cross-patient seizure prediction. The proposed framework enables patient-level interpretability by tracing the origins of significant prototypes; for instance, it can report the seizure type of the patient to which a prototype belongs. This surpasses existing sample-level interpretability, which is limited to individual patient samples. To further improve the model's generalization capability, we introduce a contrastive semantic alignment loss that constrains the embedding space, enhancing the robustness of the learned prototypes. We evaluate the proposed model on the Freiburg intracranial electroencephalography (iEEG) dataset, which comprises 20 patients and a total of 82 seizures. The experimental results demonstrate a high sensitivity of 79.0%, a low false prediction rate of 0.183/h, and a high area under the receiver operating characteristic curve (AUC) of 0.804, achieving state-of-the-art performance with self-interpretable evidence compared with current cross-patient seizure prediction methods. Our study represents a significant step toward an interpretable and generalizable model for seizure prediction, facilitating the application of deep learning models in clinical diagnosis. © 2024 Elsevier Ltd
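To make the abstract's two key mechanisms concrete, the sketch below illustrates (i) a prototype layer whose learned prototypes can be traced back to the training patient they came from, enabling the patient-level explanations described above, and (ii) a contrastive semantic alignment loss on the embedding space. This is a minimal PyTorch-style sketch under stated assumptions, not the authors' implementation: `PrototypeLayer`, `csa_loss`, the ProtoPNet-style log activation, and all hyperparameters are illustrative names and choices introduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeLayer(nn.Module):
    """Scores embeddings against learned prototypes (hypothetical sketch).

    Each prototype keeps a `source_patient` tag so a prediction can be
    explained at the patient level by reporting which patients' prototypes
    were most activated.
    """

    def __init__(self, n_prototypes: int, dim: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, dim))
        # Filled after training by projecting each prototype onto its
        # nearest training sample; placeholder bookkeeping here.
        self.source_patient = [None] * n_prototypes

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, dim) -> similarity scores: (batch, n_prototypes)
        d2 = torch.cdist(z, self.prototypes).pow(2)
        return torch.log((d2 + 1.0) / (d2 + 1e-4))  # ProtoPNet-style activation


def csa_loss(z: torch.Tensor, y: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Contrastive semantic alignment: pull same-class embeddings together,
    push different-class embeddings at least `margin` apart."""
    d = torch.cdist(z, z)                                  # pairwise L2 distances
    same = y.unsqueeze(0) == y.unsqueeze(1)                # same-class mask
    off_diag = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = d[same & off_diag]                               # same class, distinct samples
    neg = F.relu(margin - d[~same])                        # hinge on different classes
    pos_term = pos.pow(2).mean() if pos.numel() else z.new_zeros(())
    neg_term = neg.pow(2).mean() if neg.numel() else z.new_zeros(())
    return pos_term + neg_term


if __name__ == "__main__":
    # Toy usage: cross-entropy on prototype similarities plus the CSA term.
    z = torch.randn(8, 16)                 # embeddings from a feature extractor
    y = torch.randint(0, 2, (8,))          # 0 = interictal, 1 = preictal
    proto = PrototypeLayer(n_prototypes=10, dim=16)
    head = nn.Linear(10, 2)                # class logits from prototype scores
    loss = F.cross_entropy(head(proto(z)), y) + 0.1 * csa_loss(z, y)
    loss.backward()
    print(float(loss))
```

The margin of 1.0 and the 0.1 weighting of the CSA term are arbitrary placeholders; in practice they would be tuned jointly with the domain adaptation objective the paper builds around the prototypes.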
Related papers (50 in total)
  • [31] Interpretable Spatiotemporal Deep Learning Model for Traffic Flow Prediction based on Potential Energy Fields
    Ji, Jiahao
    Wang, Jingyuan
    Jiang, Zhe
    Ma, Jingtian
    Zhang, Hu
20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020), 2020 : 1076 - 1081
  • [32] Intraoperative Hypotension Prediction Based on Features Automatically Generated Within an Interpretable Deep Learning Model
    Hwang, Eugene
    Park, Yong-Seok
    Kim, Jin-Young
    Park, Sung-Hyuk
    Kim, Junetae
    Kim, Sung-Hoon
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (10) : 13887 - 13901
  • [33] Deep Learning based Lightweight Model for Seizure Detection using Spectrogram Images
    Khan, Mohd Maaz
    Khan, Irfan Mabood
    Farooq, Omar
2022 10TH INTERNATIONAL SYMPOSIUM ON DIGITAL FORENSICS AND SECURITY (ISDFS), 2022
  • [34] An interpretable deep learning model to map land subsidence hazard
    Rahmani, Paria
    Gholami, Hamid
    Golzari, Shahram
    ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH, 2024, 31 (11) : 17372 - 17386
  • [35] A deep learning based health index construction method with contrastive learning
    Wang, Hongfei
    Li, Xiang
    Zhang, Zhuo
    Deng, Xinyang
    Jiang, Wen
    RELIABILITY ENGINEERING & SYSTEM SAFETY, 2024, 242
  • [37] AudioProtoPNet: An interpretable deep learning model for bird sound classification
    Heinrich, Rene
    Rauch, Lukas
    Sick, Bernhard
    Scholz, Christoph
    ECOLOGICAL INFORMATICS, 2025, 87
  • [38] Development of Biologically Interpretable Multimodal Deep Learning Model for Cancer Prognosis Prediction
    Azher, Zarif L.
    Vaickus, Louis J.
    Salas, Lucas A.
    Christensen, Brock C.
    Levy, Joshua J.
    37TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, 2022, : 636 - 644
  • [39] SeizFt: Interpretable Machine Learning for Seizure Detection Using Wearables
    Al-Hussaini, Irfan
    Mitchell, Cassie S.
BIOENGINEERING-BASEL, 2023, 10 (08)
  • [40] Patient-Specific Seizure Prediction via Adder Network and Supervised Contrastive Learning
    Zhao, Yuchang
    Li, Chang
    Liu, Xiang
    Qian, Ruobing
    Song, Rencheng
    Chen, Xun
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2022, 30 : 1536 - 1547