Examining Effects of Schizophrenia on EEG with Explainable Deep Learning Models

Cited by: 9
Authors
Ellis, Charles A. [1 ,2 ]
Sattiraju, Abhinav [1 ,2 ]
Miller, Robyn [1 ,2 ]
Calhoun, Vince [1 ,2 ]
Affiliations
[1] Georgia State Univ, Georgia Inst Technol, Triinst Ctr Translat Res Neuroimaging & Data Sci, Atlanta, GA 30303 USA
[2] Emory Univ, Atlanta, GA 30322 USA
Source
2022 IEEE 22ND INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOENGINEERING (BIBE 2022) | 2022
Funding
US National Science Foundation;
Keywords
schizophrenia; deep learning; diagnosis; explainable AI;
DOI
10.1109/BIBE55377.2022.00068
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
Schizophrenia (SZ) is a mental disorder that affects millions of people globally. At this time, diagnosis of SZ is based upon symptoms, which can vary from patient to patient, complicating diagnosis. To address this issue, researchers have begun to look for neurological biomarkers of SZ and develop methods for automated diagnosis. In recent years, several studies have applied deep learning to raw EEG for automated SZ diagnosis. However, the use of raw time-series data makes explainability more difficult than it is for traditional machine learning algorithms trained on manually engineered features. As such, none of these studies have sought to explain their models, which is problematic within a healthcare context where explainability is a critical component. In this study, we apply, for the first time, perturbation-based explainability approaches to gain insight into the spectral and spatial features learned by two distinct deep learning models trained on raw EEG for SZ diagnosis. We develop convolutional neural network (CNN) and CNN long short-term memory network (CNN-LSTM) architectures. Results show that both models prioritize the delta- and gamma-bands of the T8 and C3 electrodes, which agrees with previous literature and supports the overall utility of our models. This study represents a step forward in the implementation of deep learning models for clinical SZ diagnosis, and it is our hope that it will inspire the more widespread application of explainability methods for insight into deep learning models trained for SZ diagnosis in the future.
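The record does not specify the exact perturbation scheme the authors used, but the general idea behind perturbation-based spectral explainability can be illustrated with a minimal sketch: zero out one frequency band at a time in the Fourier domain and measure how much the model's output changes. Everything below is an assumption for illustration; `band_perturbation_importance` and `toy_model` are hypothetical names, and the toy model (mean gamma-band power) stands in for a trained CNN or CNN-LSTM.

```python
import numpy as np

def band_perturbation_importance(model, x, fs, bands):
    """Estimate per-band importance by zeroing each frequency band
    in the Fourier domain and measuring the change in model output.

    model : callable mapping an EEG segment (channels x samples) to a scalar
    x     : EEG segment, shape (n_channels, n_samples)
    fs    : sampling rate in Hz
    bands : dict of band name -> (low_hz, high_hz)
    """
    baseline = model(x)
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    importance = {}
    for name, (lo, hi) in bands.items():
        spec = np.fft.rfft(x, axis=-1)
        spec[:, (freqs >= lo) & (freqs < hi)] = 0  # remove this band
        x_pert = np.fft.irfft(spec, n=x.shape[-1], axis=-1)
        importance[name] = abs(model(x_pert) - baseline)
    return importance

def toy_model(x, fs=250):
    """Stand-in 'classifier': mean power in the gamma band (25-50 Hz)."""
    spec = np.abs(np.fft.rfft(x, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    return spec[:, (freqs >= 25) & (freqs < 50)].mean()

rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 250 * 5))  # 19 channels, 5 s at 250 Hz
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 25), "gamma": (25, 50)}
scores = band_perturbation_importance(toy_model, eeg, 250, bands)
```

For this toy model, removing the gamma band eliminates all of the power the model measures, so `scores["gamma"]` dominates; a real application would substitute the trained network's class score and repeat the perturbation per electrode to obtain spatial importance as well.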
Pages: 301 - 304
Page count: 4
Related Papers
13 total
  • [1] Explainability for artificial intelligence in healthcare: a multidisciplinary perspective
    Amann, Julia
    Blasimme, Alessandro
    Vayena, Effy
    Frey, Dietmar
    Madai, Vince I.
    [J]. BMC MEDICAL INFORMATICS AND DECISION MAKING, 2020, 20 (01)
  • [2] Barascud N, meegkit: EEG and MEG denoising in Python
  • [3] An efficient classifier to diagnose of schizophrenia based on the EEG signals
    Boostani, Reza
    Sadatnezhad, Khadijeh
    Sabeti, Malihe
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2009, 36 (03) : 6492 - 6499
  • [4] Buettner R, 2020, HAWAII INT C SYSTEM, P3216, DOI DOI 10.24251/HICSS.2020.393
  • [5] Ellis C. A., 2021, BIORXIV
  • [6] Ellis C.A., 2021, 2021 IEEE 21 INT C B, P0
  • [7] Ellis C. A., 2022, bioRxiv
  • [8] A multimodal magnetoencephalography 7 T fMRI and 7 T proton MR spectroscopy study in first episode psychosis
    Gawne, Timothy J.
    Overbeek, Gregory J.
    Killen, Jeffery F.
    Reid, Meredith A.
    Kraguljac, Nina V.
    Denney, Thomas S.
    Ellis, Charles A.
    Lahti, Adrienne C.
    [J]. NPJ SCHIZOPHRENIA, 2020, 6 (01):
  • [9] High vs Low Frequency Neural Oscillations in Schizophrenia
    Moran, Lauren V.
    Hong, L. Elliot
    [J]. SCHIZOPHRENIA BULLETIN, 2011, 37 (04) : 659 - 663
  • [10] EEG in schizophrenic patients: mutual information analysis
    Na, SH
    Jin, SH
    Kim, SY
    Ham, BJ
    [J]. CLINICAL NEUROPHYSIOLOGY, 2002, 113 (12) : 1954 - 1960