EEG-ConvTransformer for single-trial EEG-based visual stimulus classification

Cited by: 70
Authors
Bagchi, Subhranil [1 ]
Bathula, Deepti R. [1 ]
Affiliations
[1] Indian Inst Technol Ropar, Dept Comp Sci & Engn, Rupnagar 140001, Punjab, India
Keywords
EEG; Visual stimulus classification; Deep learning; Transformer; Multi-head attention; Inter-region similarity; Temporal convolution; Inter-head diversity; Head representations; Neural networks
DOI
10.1016/j.patcog.2022.108757
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Different categories of visual stimuli evoke distinct activation patterns in the human brain. These patterns can be captured with EEG for use in applications such as Brain-Computer Interfaces (BCI). However, accurate classification of these patterns from single-trial data is challenging due to the low signal-to-noise ratio of EEG. Recently, deep learning-based transformer models with multi-head self-attention have shown great potential for analyzing a variety of data. This work introduces an EEG-ConvTransformer network based on both multi-head self-attention and temporal convolution. The novel architecture incorporates self-attention modules to capture inter-region interaction patterns and convolutional filters to learn temporal patterns in a single module. Experimental results demonstrate that EEG-ConvTransformer achieves improved classification accuracy over state-of-the-art techniques across five different visual stimulus classification tasks. Finally, quantitative analysis of inter-head diversity shows low similarity in representational space, underscoring the implicit diversity of multi-head attention. (C) 2022 Elsevier Ltd. All rights reserved.
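The abstract names two ingredients combined in a single module: multi-head self-attention over region-wise features (capturing inter-region similarity) and 1D convolution along the time axis. A minimal NumPy sketch of these two operations is given below; all shapes, random projection matrices, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, n_heads, rng):
    """Self-attention over region features.

    X: (n_regions, d_model). Each head uses hypothetical random Q/K/V
    projections; the attention matrix A encodes inter-region similarity.
    Returns the concatenated head outputs, shape (n_regions, d_model).
    """
    n, d = X.shape
    dh = d // n_heads  # per-head dimension (assumes d divisible by n_heads)
    heads = []
    for _ in range(n_heads):
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) / np.sqrt(d)
                      for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = softmax(Q @ K.T / np.sqrt(dh))  # (n_regions, n_regions)
        heads.append(A @ V)
    return np.concatenate(heads, axis=-1)

def temporal_conv(X, kernel):
    """Depthwise 1D convolution along the time axis.

    X: (channels, time); the same kernel is applied to every channel,
    with 'same' padding so the time length is preserved.
    """
    return np.stack([np.convolve(row, kernel, mode="same") for row in X])
```

In the paper's module both operations are learned jointly end-to-end; here the weights are random only to keep the sketch self-contained.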
Pages: 9
References
39 entries
[1] Alfeld, P., 1984, Computer-Aided Geometric Design, V1, P169, DOI 10.1016/0167-8396(84)90029-3
[2] Bagchi, S., 2021, European Signal Processing Conference (EUSIPCO), P1276, DOI 10.23919/EUSIPCO54536.2021.9615945
[3] Bashivan, P., 2016, Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks
[4] Beyer, L., 2020, International Conference on Learning Representations
[5] Bobe, A. S.; Alekseev, A. S.; Komarova, M. V.; Fastovets, D., 2018, Single-trial ERP Feature Extraction and Classification for Visual Object Recognition Task, Fifth International Conference on Engineering and Telecommunication (EnT-MIPT 2018), P188-192
[6] Chollet, F., 2017, Xception: Deep Learning with Depthwise Separable Convolutions, 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), P1800-1807
[7] Daly, I.; Nasuto, S. J.; Warwick, K., 2012, Brain computer interface control via functional connectivity dynamics, Pattern Recognition, V45 (06), P2123-2136
[8] Fraschini, M.; Pani, S. M.; Didaci, L.; Marcialis, G. L., 2019, Robustness of functional connectivity metrics for EEG-based personal identification over task-induced intra-class and inter-class variations, Pattern Recognition Letters, V125, P49-54
[9] Gao, Z.; Sun, X.; Liu, M.; Dang, W.; Ma, C.; Chen, G., 2021, Attention-Based Parallel Multiscale Convolutional Neural Network for Visual Evoked Potentials EEG Classification, IEEE Journal of Biomedical and Health Informatics, V25 (08), P2887-2894
[10] Grill-Spector, K., 2003, The neural basis of object perception, Current Opinion in Neurobiology, V13 (02), P159-166