A systematic comparison of deep learning methods for EEG time series analysis

Cited by: 12
Authors
Walther, Dominik [1]
Viehweg, Johannes [1]
Haueisen, Jens [2]
Maeder, Patrick [1,3]
Affiliations
[1] Tech Univ Ilmenau, Data Intens Syst & Visualizat Grp dAI SY, Ilmenau, Germany
[2] Tech Univ Ilmenau, Inst Biomed Engn & Informat, Ilmenau, Germany
[3] Friedrich Schiller Univ, Fac Biol Sci, Jena, Germany
Keywords
recurrent neural networks; feed forward neural networks; time series analysis; attention; transformer networks; CLASSIFICATION; ALGORITHMS;
DOI
10.3389/fninf.2023.1067095
Chinese Library Classification
Q [Biological Sciences];
Discipline classification codes
07; 0710; 09
Abstract
Analyzing time series data such as EEG or MEG is challenging because the signals are noisy, high-dimensional, and patient-specific. Deep learning methods have been shown to outperform shallow learning methods, which rely on handcrafted and often subjective features. Recurrent neural networks (RNNs) in particular are considered well suited to analyzing such continuous data; however, previous studies show that they are computationally expensive and difficult to train. Feed-forward networks (FFNs), in contrast, have mostly been applied in combination with handcrafted, problem-specific feature extraction such as the short-time Fourier transform or the discrete wavelet transform. What is sought are easily applicable methods that analyze raw data efficiently and remove the need for problem-specific adaptations. In this work, we systematically compare RNN and FFN topologies as well as advanced architectural concepts on multiple datasets using the same data preprocessing pipeline. We examine the behavior of these approaches to provide an update and a guideline for researchers dealing with automated analysis of EEG time series data. For the results to be meaningful, the approaches must be compared under an identical experimental setup, which, to our knowledge, has not been done before. This paper is a first step toward a fairer comparison of different methodologies on EEG time series data. Our results indicate that a recurrent LSTM architecture with attention performs best on less complex tasks, while the temporal convolutional network (TCN) outperforms all recurrent architectures on the most complex dataset, yielding an 8.61% accuracy improvement. In general, we found that the attention mechanism substantially improves the classification results of RNNs. Toward a lightweight and online-learning-ready approach, we found that extreme learning machines (ELMs) yield comparable results on the less complex tasks.
Pages: 17