Multiscale Temporal Self-Attention and Dynamical Graph Convolution Hybrid Network for EEG-Based Stereogram Recognition

Cited by: 28
Authors
Shen, Lili [1 ]
Sun, Mingyang [1 ]
Li, Qunxia [2 ]
Li, Beichen [1 ]
Pan, Zhaoqing [1 ]
Lei, Jianjun [1 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Univ Sci & Technol Beijing, Sch Econ & Management, Beijing 100098, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Electroencephalography; Convolution; Feature extraction; Electrodes; Brain modeling; Task analysis; EEG; DRDS; self-attention; multi-scale convolution; dynamical graph convolution; NEURAL-NETWORK; MOTOR IMAGERY; FUNCTIONAL CONNECTIVITY; EMOTION RECOGNITION; CLASSIFICATION; MECHANISMS; ALGORITHM; FATIGUE; SIGNALS;
DOI
10.1109/TNSRE.2022.3173724
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
Stereopsis is the human ability to perceive depth in real scenes. Conventional stereopsis measurement relies on subjective judgments of stereograms and is therefore easily affected by personal awareness. To alleviate this issue, in this paper, electroencephalography (EEG) signals evoked by dynamic random dot stereograms (DRDS) are collected for stereogram recognition, which can help ophthalmologists diagnose strabismus patients even without real-time communication. To classify the collected EEG signals, a novel multi-scale temporal self-attention and dynamical graph convolution hybrid network (MTS-DGCHN) is proposed, comprising a multi-scale temporal self-attention module, a dynamical graph convolution module, and a classification module. First, the multi-scale temporal self-attention module learns temporal-continuity information: its temporal self-attention block highlights the global importance of each time segment within an EEG trial, and its multi-scale convolution block further extracts high-level temporal features over multiple receptive fields. Meanwhile, the dynamical graph convolution module captures spatial functional relationships between EEG electrodes, where the adjacency matrix of each GCN layer is adaptively tuned to explore the optimal intrinsic relationships. Finally, the temporal and spatial features are fed into the classification module to obtain predictions. Extensive experiments on two collected datasets, SRDA and SRDB, demonstrate that the proposed MTS-DGCHN achieves outstanding classification performance compared with other methods. The datasets are available at https://github.com/YANGeeg/TJU-SRD-datasets and the code at https://github.com/YANGeeg/MTS-DGCHN.
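The authors' implementation is in the linked repository; as a rough, generic illustration of the two core operations the abstract names — self-attention over time segments and graph convolution with a learnable (dynamically tuned) adjacency matrix — the NumPy sketch below may help. All shapes, parameter names, and the normalization scheme are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over time segments.
    x: (segments, features). The attention scores weight the global
    importance of each segment within one trial."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
    return scores @ v

def dynamical_graph_conv(x, a_learn, w):
    """One GCN layer whose adjacency matrix is itself a trainable
    parameter (here just a raw weight matrix we normalize).
    x: (electrodes, features); a_learn: (electrodes, electrodes)."""
    a = np.abs(a_learn) + np.eye(a_learn.shape[0])     # nonnegative + self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1))) # degree normalization
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt                # symmetric normalized adjacency
    return np.maximum(a_hat @ x @ w, 0.0)              # ReLU activation

rng = np.random.default_rng(0)
seg, feat, elec = 8, 16, 62   # hypothetical segment/feature/electrode counts
x_t = rng.standard_normal((seg, feat))
att = temporal_self_attention(
    x_t, *(rng.standard_normal((feat, feat)) for _ in range(3)))
x_s = rng.standard_normal((elec, feat))
gcn = dynamical_graph_conv(
    x_s, rng.standard_normal((elec, elec)), rng.standard_normal((feat, feat)))
print(att.shape, gcn.shape)  # (8, 16) (62, 16)
```

In the full model, `a_learn` would be updated by backpropagation alongside the other weights, which is what lets each GCN layer adapt its electrode-connectivity graph rather than fixing it from electrode geometry.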
Pages: 1191-1202
Page count: 12