MGFKD: A semi-supervised multi-source domain adaptation algorithm for cross-subject EEG emotion recognition

Cited by: 9
Authors
Zhang, Rui [1 ]
Guo, Huifeng [1 ]
Xu, Zongxin [1 ]
Hu, Yuxia [1 ]
Chen, Mingming [1 ]
Zhang, Lipeng [1 ]
Affiliations
[1] Zhengzhou University, School of Electrical and Information Engineering, Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou 450001, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Semi-supervised domain adaptation algorithm; Emotion recognition; Golden subjects; Transfer learning; Negative transfer; Kernel
DOI
10.1016/j.brainresbull.2024.110901
Chinese Library Classification
Q189 [Neuroscience]
Discipline code
071006
Abstract
Currently, most models in the field of cross-subject EEG emotion recognition rarely consider the negative transfer problem. To solve this problem, this paper proposes a semi-supervised domain adaptation algorithm based on a few labeled samples of the target subject, called multi-domain geodesic flow kernel dynamic distribution alignment (MGFKD). It consists of three modules: 1) GFK common feature extractor: projects the feature distributions of the source and target subjects onto the Grassmann manifold and obtains their latent common features through the GFK method. 2) Source domain selector: obtains pseudo-labels for the target subject through a weak classifier and finds "golden source subjects" using the few known labels of the target subject. 3) Label corrector: uses a dynamic distribution balance strategy to correct the pseudo-labels of the target subject. We conducted comparison experiments on the SEED and SEED-IV datasets, and the results show that MGFKD outperforms both unsupervised and semi-supervised domain adaptation algorithms, achieving average accuracies of 87.51 ± 7.68% on SEED and 68.79 ± 8.25% on SEED-IV with only one labeled sample per video for the target subject. When the number of source domains is set to 6 and the number of known labels to 5, the accuracies increase to 90.20 ± 7.57% and 69.99 ± 7.38%, respectively. These results show that the proposed algorithm can efficiently improve cross-subject EEG emotion classification performance. Since it needs only a small number of labeled samples from new subjects, it has strong application value for future EEG-based emotion recognition applications.
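For intuition, the following minimal Python sketch (assuming NumPy, SciPy, and scikit-learn are available) illustrates the first two modules as the abstract describes them: the geodesic flow kernel in its standard closed form (Gong et al., CVPR 2012) and golden-source-subject selection with a 1-NN weak classifier scored on the few known target labels. The subspace dimension d, the top-k selection rule, and all function names are illustrative assumptions rather than the authors' exact design; the dynamic-distribution-alignment label corrector (module 3) is omitted.

import numpy as np
from scipy.linalg import null_space
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def gfk_matrix(Xs, Xt, d=20):
    # Geodesic flow kernel between the d-dim PCA subspaces of source
    # and target features (rows = samples, columns = features).
    Ps = PCA(d).fit(Xs).components_.T            # D x d source basis
    Pt = PCA(d).fit(Xt).components_.T            # D x d target basis
    Rs = null_space(Ps.T)                        # D x (D-d) complement of Ps
    U1, gam, Vt = np.linalg.svd(Ps.T @ Pt)
    theta = np.arccos(np.clip(gam, -1.0, 1.0))   # principal angles
    sin_t = np.maximum(np.sin(theta), 1e-12)
    # Recover U2 from Rs^T Pt = -U2 diag(sin theta) V^T; columns with
    # theta ~ 0 are ill-conditioned but are damped by the Lambda weights.
    U2 = -(Rs.T @ Pt @ Vt.T) / sin_t
    t = np.maximum(theta, 1e-12)
    l1 = 0.5 * (1 + np.sin(2 * t) / (2 * t))     # integral of cos^2(u*theta) du
    l2 = 0.5 * (np.cos(2 * t) - 1) / (2 * t)     # -integral of cos(u*theta)sin(u*theta) du
    l3 = 0.5 * (1 - np.sin(2 * t) / (2 * t))     # integral of sin^2(u*theta) du
    Omega = np.hstack([Ps @ U1, Rs @ U2])        # D x 2d
    Lam = np.block([[np.diag(l1), np.diag(l2)],
                    [np.diag(l2), np.diag(l3)]])
    return Omega @ Lam @ Omega.T                 # D x D kernel matrix G

def gfk_embed(G, X):
    # Embed features so that plain dot products equal x_i^T G x_j.
    w, V = np.linalg.eigh(G)
    sqrt_G = V @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ V.T
    return X @ sqrt_G

def select_golden_sources(sources, Xt, labeled_idx, yt_labeled, k=6, d=20):
    # Rank source subjects by weak-classifier (1-NN) accuracy on the few
    # labeled target samples; keep the top-k "golden" subjects.
    scores = []
    for Xs, ys in sources:                       # sources: list of (Xs, ys)
        G = gfk_matrix(Xs, Xt, d)
        clf = KNeighborsClassifier(1).fit(gfk_embed(G, Xs), ys)
        pred = clf.predict(gfk_embed(G, Xt[labeled_idx]))
        scores.append(np.mean(pred == yt_labeled))
    best = np.argsort(scores)[::-1][:k]
    return [sources[i] for i in best]

Because classification here uses 1-NN distances in the embedded space, the overall scale of G is immaterial; what the selector exploits is that embeddings from "golden" source subjects land closer to the target subject's few labeled samples than those from subjects prone to negative transfer.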
Pages: 9