Multi-domain fusion deep graph convolution neural network for EEG emotion recognition

Cited by: 0
Authors
Jinying Bi
Fei Wang
Xin Yan
Jingyu Ping
Yongzhao Wen
Affiliations
[1] Northeastern University, College of Information Science and Engineering
[2] Northeastern University, Faculty of Robot Science and Engineering
Source
Neural Computing and Applications | 2022 / Volume 34
Keywords
Emotion recognition; Multi-domain fusion features; Graph convolution neural network; Transfer learning
DOI
Not available
Abstract
Electroencephalogram (EEG)-based emotion recognition has become an active research field, with most attention given to decoding three basic emotional states (positive, negative, and neutral) from EEG. Traditional EEG emotion recognition relies on a single-feature input mode that cannot cover information from multiple feature types; for brain functional networks, the feature extraction process is cumbersome, and the characteristics of different subjects show individual differences. Therefore, graph convolution theory and brain functional connectivity are introduced into this research, and a deep graph convolution neural network with multi-domain fusion feature input (MdGCNN) is proposed. Pearson correlation is used to determine the adjacency matrix. A SortPooling layer serves as a bridge between the graph convolution layers and the standard neural network layers, sorting the node features into a consistent order so that brain-topology features are extracted automatically on top of the single-electrode features. Taking MdGCNN as the base model and minimizing the feature distance between the source and target domains, a transfer learning (TL) model for cross-subject emotion recognition, MdGCNN-TL, is proposed. MdGCNN-TL is further extended to traverse the target domain of a single subject in a two-to-one domain form. Following the idea of principal component analysis, transfer models with high recognition performance are selected by the degree of subject correlation (SC), upgrading MdGCNN-TL to MdGCNN-TL-SC. The proposed models are evaluated on the SEED dataset and further validated on the DEAP dataset. The results show that the proposed models achieve better performance in EEG emotion recognition.
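The abstract does not give implementation details, but its two most concrete ingredients, a Pearson-correlation adjacency matrix over EEG channels and a graph-convolution propagation step, can be sketched. The following is a minimal illustrative sketch only: the feature layout (62 SEED channels with 5 band features per channel), the correlation threshold, and the helper names build_adjacency and graph_conv are assumptions, not the authors' code.

```python
import numpy as np

def build_adjacency(features: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Pearson-correlation adjacency over EEG channels.

    features: (n_channels, n_features) array, e.g. per-channel band features.
    (Illustrative helper; the paper's exact construction is not specified here.)
    """
    adj = np.abs(np.corrcoef(features))   # |Pearson r| between channel feature vectors
    adj[adj < threshold] = 0.0            # optional sparsification (assumed)
    np.fill_diagonal(adj, 1.0)            # add self-loops
    return adj

def graph_conv(features: np.ndarray, adj: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One graph-convolution step: ReLU(D^{-1/2} A D^{-1/2} X W)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    propagated = d_inv_sqrt @ adj @ d_inv_sqrt @ features @ weight
    return np.maximum(propagated, 0.0)

# Toy usage: 62 channels (SEED montage), 5 band features per channel.
rng = np.random.default_rng(0)
x = rng.standard_normal((62, 5))
a = build_adjacency(x, threshold=0.2)
w = rng.standard_normal((5, 8)) * 0.1
h = graph_conv(x, a, w)                   # (62, 8) node embeddings
print(h.shape)
```

On top of such node embeddings, the paper's SortPooling layer would order the nodes consistently before the standard layers, and MdGCNN-TL would add a penalty on the feature distance between source- and target-domain embeddings; the exact pooling parameters and distance measure are not specified in the abstract.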
Pages: 22241-22255
Number of pages: 14