EEG-Based Emotion Recognition Using Convolutional Recurrent Neural Network with Multi-Head Self-Attention

Cited by: 30
Authors
Hu, Zhangfang [1]
Chen, Libujie [1,2]
Luo, Yuan [1,2]
Zhou, Jingfan [1]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Key Lab Optoelect Informat Sensing & Technol, Chongqing 400065, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Sch Adv Mfg Engn, Chongqing 400065, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 21
Funding
National Natural Science Foundation of China
Keywords
EEG; emotion recognition; CNN; BiLSTM; multi-head self-attention; time-frequency map; CLASSIFICATION; DEEP;
DOI
10.3390/app122111255
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703
Abstract
Featured Application: The method proposed in this study can be applied to EEG-based emotion recognition and achieves improved results.

In recent years, deep learning has been widely used in emotion recognition, but the models and algorithms used in practical applications still leave much room for improvement. The development of graph convolutional neural networks has also inspired new approaches to EEG-based emotion recognition. In this paper, we propose an emotion recognition method based on a novel deep learning model. First, the EEG signal is spatially filtered using the common spatial pattern (CSP), and the filtered signal is converted into a time-frequency map by the continuous wavelet transform (CWT); this serves as the input to the network. Feature extraction and classification are then performed by the deep learning model. We call this model CNN-BiLSTM-MHSA; it consists of a convolutional neural network (CNN), a bidirectional long short-term memory network (BiLSTM), and multi-head self-attention (MHSA). The network learns both the temporal and spatial information of EEG emotion signals in depth: the CNN smooths the EEG signals and extracts deep features, the BiLSTM learns emotion information from both past and future time steps, and the MHSA improves recognition accuracy by reassigning weights to the emotion features. Finally, we conducted emotion classification experiments on the DEAP dataset, and the results show that the method outperforms existing classification approaches. The recognition accuracy for the binary high/low valence, arousal, dominance, and liking states is 98.10%, and the accuracy for the four-class high/low valence-arousal task is 89.33%.
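The MHSA stage described in the abstract reweights the BiLSTM's feature sequence by attending over all time steps at once. The sketch below is an illustrative NumPy implementation of generic multi-head self-attention, not the authors' code; the dimensions, weight matrices, and head count are assumptions chosen for the example.

```python
import numpy as np

def multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads):
    """Scaled dot-product self-attention with multiple heads (illustrative).

    x: (seq_len, d_model) feature sequence, e.g. BiLSTM outputs.
    w_q, w_k, w_v, w_o: (d_model, d_model) projection matrices (assumed square).
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    def split_heads(m):
        # (seq_len, d_model) -> (num_heads, seq_len, d_head)
        return m.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split_heads(x @ w_q), split_heads(x @ w_k), split_heads(x @ w_v)

    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, seq, seq)
    scores -= scores.max(axis=-1, keepdims=True)         # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over key axis

    heads = weights @ v                                  # (heads, seq, d_head)
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ w_o                                  # reweighted features

# Toy run with hypothetical sizes: 16 time steps, 32-dim features, 4 heads.
rng = np.random.default_rng(0)
seq_len, d_model, num_heads = 16, 32, 4
x = rng.standard_normal((seq_len, d_model))
ws = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4)]
out = multi_head_self_attention(x, *ws, num_heads)
print(out.shape)  # prints (16, 32)
```

Each attention row sums to 1, so the output at every time step is a learned convex mixture of value vectors from all time steps; this is the "weight reassignment" over emotion features that the abstract attributes to the MHSA block.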
Pages: 19
References
34 records
[1] Al-Nafjan A. Int J Adv Comput Sc, 2017, 8: 419.
[2] Blankertz B, Tomioka R, Lemm S, Kawanabe M, Mueller K-R. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Processing Magazine, 2008, 25(1): 41-56.
[3] Chao H, Dong L, Liu Y, Lu B. Emotion Recognition from Multiband EEG Signals Using CapsNet. Sensors, 2019, 19(9).
[4] Chen X, He J, Wu X, Yan W, Wei W. Sleep staging by bidirectional long short-term memory convolution neural network. Future Generation Computer Systems, 2020, 109: 188-196.
[5] Cui F, Wang R, Ding W, Chen Y, Huang L. A Novel DE-CNN-BiLSTM Multi-Fusion Model for EEG Emotion Recognition. Mathematics, 2022, 10(4).
[6] Defferrard M. Advances in Neural Information Processing Systems, 2016, 29.
[7] Fares A. IEEE International Conference on Bioinformatics and Biomedicine, 2018: 368. DOI: 10.1109/BIBM.2018.8621152.
[8] Fourati R, Ammar B, Aouiti C, Sanchez-Medina J, Alimi AM. Optimized Echo State Network with Intrinsic Plasticity for EEG-Based Emotion Recognition. Neural Information Processing (ICONIP 2017), Pt II, 2017, 10635: 718-727.
[9] Gaertner M. AMCIS '21 Proceedings, 2021: 1.
[10] Galvao F, Alarcao SM, Fonseca MJ. Predicting Exact Valence and Arousal Values from EEG. Sensors, 2021, 21(10).