Cross-subject EEG emotion recognition using multi-source domain manifold feature selection

Cited by: 28
Authors
She, Qingshan [1 ]
Shi, Xinsheng [1 ]
Fang, Feng [2 ]
Ma, Yuliang [1 ]
Zhang, Yingchun [2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Automation, Hangzhou 310018, Zhejiang, Peoples R China
[2] Univ Houston, Dept Biomed Engn, Houston, TX 77204 USA
Keywords
Affective brain-computer interface; Emotion recognition; Source domain selection; Differential entropy feature
DOI
10.1016/j.compbiomed.2023.106860
Chinese Library Classification
Q [Biological sciences]
Subject classification codes
07; 0710; 09
Abstract
Recent research on emotion recognition suggests that domain adaptation, a form of transfer learning, can solve the cross-subject problem in the affective brain-computer interface (aBCI) field. However, traditional domain adaptation methods perform single-to-single domain transfer or simply merge different source domains into one larger domain to transfer knowledge, which can result in negative transfer. In this study, a multi-source transfer learning framework was proposed to improve the performance of multi-source electroencephalogram (EEG) emotion recognition. The method first used the data distribution similarity ranking (DDSA) method to select an appropriate source domain for each target domain off-line, and reduced data drift between domains through manifold feature mapping on the Grassmann manifold. Meanwhile, the minimum redundancy maximum relevance algorithm (mRMR) was employed to select more representative manifold features; the conditional and marginal distributions of the manifold features were then jointly aligned, and a domain-invariant classifier was learned through structural risk minimization (SRM). Finally, a weighted fusion criterion was applied to further improve recognition performance. We compared our method with several state-of-the-art domain adaptation techniques on the SEED and DEAP datasets. Results showed that, compared with the conventional MEDA algorithm, the recognition accuracy of our proposed algorithm on the SEED and DEAP datasets was improved by 6.74% and 5.34%, respectively. In addition, compared with TCA, JDA, and other state-of-the-art algorithms, the performance of our proposed method was also improved, with the best average accuracy of 86.59% on SEED and 64.40% on DEAP. Our results demonstrate that the proposed multi-source transfer learning framework is more effective and feasible than other state-of-the-art methods in recognizing different emotions by solving the cross-subject problem.
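The pipeline described above begins with per-target source-domain selection. As a rough illustration only (the abstract does not specify the exact DDSA similarity metric, so the mean-distance proxy below is an assumption), the sketch ranks candidate source subjects by how close their feature distributions sit to the target's and keeps the k closest:

```python
import numpy as np

def rank_source_domains(target, sources, k=3):
    """Rank candidate source-subject feature matrices by similarity to the
    target distribution and keep the k closest.

    Similarity here is a simple proxy: the Euclidean distance between
    per-feature means. The paper's DDSA ranking may use a different
    distribution-distance measure.
    """
    t_mean = target.mean(axis=0)
    dists = np.asarray(
        [np.linalg.norm(s.mean(axis=0) - t_mean) for s in sources]
    )
    order = np.argsort(dists)  # closest source domains first
    return order[:k], dists

# Synthetic example: four source subjects, two near the target (mean ~0)
# and two far away.
rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(100, 8))
sources = [rng.normal(mu, 1.0, size=(100, 8)) for mu in (0.1, 2.0, 0.2, 3.0)]
selected, d = rank_source_domains(target, sources, k=2)
print(selected)  # indices of the two sources closest to the target
```

Selecting a subset of sources per target, rather than pooling all subjects into one large source domain, is what the abstract credits with avoiding negative transfer.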
Pages: 11
References (45)
[1]   Emotions Recognition Using EEG Signals: A Survey [J].
Alarcao, Soraia M. ;
Fonseca, Manuel J. .
IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2019, 10 (03) :374-393
[2]  
Belkin M, 2006, J MACH LEARN RES, V7, P2399
[3]   Similarity constraint style transfer mapping for emotion recognition [J].
Chen, Lei ;
She, Qingshan ;
Meng, Ming ;
Zhang, Qizhong ;
Zhang, Jianhai .
BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 80
[4]   Domain Adaptation from Multiple Sources: A Domain-Dependent Regularization Approach [J].
Duan, Lixin ;
Xu, Dong ;
Tsang, Ivor Wai-Hung .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2012, 23 (03) :504-518
[5]  
Duan RN, 2013, I IEEE EMBS C NEUR E, P81, DOI 10.1109/NER.2013.6695876
[6]  
Gong BQ, 2012, PROC CVPR IEEE, P2066, DOI 10.1109/CVPR.2012.6247911
[7]   Multi-Source Domain Transfer Discriminative Dictionary Learning Modeling for Electroencephalogram-Based Emotion Recognition [J].
Gu, Xiaoqing ;
Cai, Weiwei ;
Gao, Ming ;
Jiang, Yizhang ;
Ning, Xin ;
Qian, Pengjiang .
IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2022, 9 (06) :1604-1612
[8]  
Hamm Jihun, 2008, P 25 INT C MACH LEAR, P376, DOI 10.1145/1390156.1390204
[9]   JOINT TEMPORAL CONVOLUTIONAL NETWORKS AND ADVERSARIAL DISCRIMINATIVE DOMAIN ADAPTATION FOR EEG-BASED CROSS-SUBJECT EMOTION RECOGNITION [J].
He, Zhipeng ;
Zhong, Yongshi ;
Pan, Jiahui .
2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, :3214-3218
[10]   Human emotion recognition from EEG-based brain-computer interface using machine learning: a comprehensive review [J].
Houssein, Essam H. ;
Hammad, Asmaa ;
Ali, Abdelmgeid A. .
NEURAL COMPUTING & APPLICATIONS, 2022, 34 (15) :12527-12557