XAI4EEG: spectral and spatio-temporal explanation of deep learning-based seizure detection in EEG time series

Cited by: 24
Authors
Raab, Dominik [1 ]
Theissler, Andreas [1 ]
Spiliopoulou, Myra [2 ]
Affiliations
[1] Aalen Univ Appl Sci, D-73430 Aalen, Germany
[2] Otto von Guericke Univ, D-39106 Magdeburg, Germany
Keywords
Explainable AI; SHAP; Deep learning; Machine learning; Epileptic seizures; EEG time series; BLACK-BOX; SUPPORT; SYSTEM; EPILEPSY; SAFETY;
DOI
10.1007/s00521-022-07809-x
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
In clinical practice, algorithmic predictions may seriously jeopardise patients' health and are therefore required to be validated by medical experts before a final clinical decision is made. Towards that aim, there is a need to incorporate explainable artificial intelligence techniques into medical research. In the specific field of epileptic seizure detection, there are several machine learning algorithms but few methods for explaining them in an interpretable way. Therefore, we introduce XAI4EEG: an application-aware approach for explainable, hybrid deep learning-based detection of seizures in multivariate EEG time series. In XAI4EEG, we combine deep learning models with domain knowledge on seizure detection, namely (a) frequency bands, (b) the location of EEG leads and (c) temporal characteristics. XAI4EEG encompasses EEG data preparation, two deep learning models and our proposed explanation module, which visualizes feature contributions obtained by two SHAP explainers, each explaining the predictions of one of the two models. The resulting visual explanations provide an intuitive identification of decision-relevant regions in the spectral, spatial and temporal EEG dimensions. To evaluate XAI4EEG, we conducted a user study in which users were asked to assess the outputs of XAI4EEG while working under time constraints, in order to emulate the fact that clinical diagnosis is, more often than not, done under time pressure. We found that the visualizations of our explanation module (1) lead to a substantially lower time for validating the predictions and (2) yield an increase in interpretability, trust and confidence compared to selected SHAP feature contribution plots.
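To make the spectral side of such explanations concrete, the following is a minimal sketch, not the authors' implementation: it extracts standard frequency-band powers from a synthetic multichannel EEG window, fits a placeholder classifier, and attributes its seizure probability to the bands with SHAP. The sampling rate, band limits, data, labels and model are all illustrative assumptions.

```python
# Minimal sketch (not the XAI4EEG pipeline): per-band spectral power from a
# synthetic EEG window, plus SHAP attributions over those bands. Sampling
# rate, band limits, data and model are illustrative assumptions.
import numpy as np
import shap
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 256                                        # assumed sampling rate (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}    # standard EEG bands

def band_powers(window):
    """Mean power per frequency band, averaged over channels.
    window: array of shape (n_channels, n_samples)."""
    freqs, psd = welch(window, fs=FS, nperseg=FS)
    return np.array([psd[:, (freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

rng = np.random.default_rng(0)
X = np.array([band_powers(rng.standard_normal((21, 3 * FS)))
              for _ in range(200)])             # 200 three-second windows
y = rng.integers(0, 2, size=200)                # placeholder seizure labels

clf = RandomForestClassifier(random_state=0).fit(X, y)

# One SHAP value per band: a toy analogue of the spectral dimension of
# the paper's visual explanations.
explainer = shap.KernelExplainer(lambda d: clf.predict_proba(d)[:, 1], X[:50])
shap_values = explainer.shap_values(X[:5])
print(dict(zip(BANDS, shap_values[0].round(4))))
```

The same attribution idea extends to the spatial (per-lead) and temporal (per-segment) dimensions by defining features over leads or time windows instead of bands.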
Pages: 10051-10068
Page count: 18