CoDF-Net: coordinated-representation decision fusion network for emotion recognition with EEG and eye movement signals

Cited by: 5
Authors
Gong, Xinrong [1 ,2 ,3 ]
Dong, Yihan [4 ]
Zhang, Tong [1 ,2 ,3 ]
Affiliations
[1] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Guangdong, Peoples R China
[2] Pazhou Lab, Brain & Affect Cognit Res Ctr, Guangzhou 510335, Guangdong, Peoples R China
[3] Minist Educ Hlth Intelligent Percept & Paralleled, Engn Res Ctr, Guangzhou 510006, Guangdong, Peoples R China
[4] Jinan Univ, Sch Journalism & Commun, Guangzhou 510632, Guangdong, Peoples R China
Keywords
Broad learning system (BLS); Electroencephalogram (EEG); Eye movement; Multi-modal emotion recognition; Multi-modal fusion; Affective computing
DOI
10.1007/s13042-023-01964-w
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Physiological signals such as EEG and eye movements have emerged as promising modalities for emotion recognition owing to their objectivity, high recognition accuracy, and cost-effectiveness. However, most existing methods fuse EEG and eye movement signals by concatenation or weighted summation, which can cause information loss and offers limited robustness to noise. To tackle this issue, we propose a Coordinated-representation Decision Fusion Network (CoDF-Net) to efficiently fuse the representations of EEG and eye movement signals. Specifically, CoDF-Net first learns personalized information by maximizing the correlation between modalities. Next, a Decision-level Fusion Broad Learning System (DF-BLS) constructs multiple sub-systems whose outputs are combined by an effective decision-making mechanism to produce the final emotional states. To evaluate the proposed method, subject-dependent and subject-independent experiments are designed on two public datasets. Extensive experiments demonstrate that the proposed method outperforms both traditional approaches and current state-of-the-art methods: CoDF-Net achieves accuracies of 94.09% and 91.62% in the subject-dependent setting and 87.04% and 83.87% in the subject-independent setting on the SEED-CHN and SEED-GER datasets, respectively. Moreover, the proposed method proves markedly more robust to noise, as demonstrated by experiments that add Gaussian noise with different standard deviations.
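The abstract sketches a two-stage pipeline: a coordinated representation learned by maximizing cross-modal correlation, followed by decision-level fusion over multiple Broad Learning System sub-systems, plus a Gaussian-noise robustness probe. The Python sketch below is a rough, hypothetical illustration of that pipeline, not the paper's implementation: classical CCA stands in for whatever correlation-maximizing objective CoDF-Net actually optimizes, averaged class scores stand in for the unspecified decision-making mechanism of DF-BLS, and all data shapes, names (X_eeg, X_eye, bls_subsystem, ...), and hyper-parameters are assumptions.

```python
# Hypothetical sketch of the CoDF-Net pipeline described in the abstract.
# All names, sizes, and hyper-parameters are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Toy stand-ins for pre-extracted EEG and eye-movement features
# (dimensions loosely SEED-style; assumed, not from the paper).
n_samples, d_eeg, d_eye, n_classes = 600, 310, 33, 3
X_eeg = rng.standard_normal((n_samples, d_eeg))
X_eye = rng.standard_normal((n_samples, d_eye))
y = rng.integers(0, n_classes, n_samples)

# Step 1: coordinated representation by maximizing cross-modal correlation.
# Classical CCA is a stand-in for CoDF-Net's actual objective.
cca = CCA(n_components=20)
cca.fit(X_eeg, X_eye)
Z_eeg, Z_eye = cca.transform(X_eeg, X_eye)
Z = np.hstack([Z_eeg, Z_eye])            # fused coordinated features

# Step 2: an ensemble of BLS-like sub-systems with decision-level fusion.
def bls_subsystem(X, Y_onehot, n_feat=100, n_enh=200, lam=1e-2, seed=0):
    """One BLS-style sub-system: random feature nodes, nonlinear
    enhancement nodes, output weights solved in closed form by ridge."""
    r = np.random.default_rng(seed)
    Wf = r.standard_normal((X.shape[1], n_feat))
    F = X @ Wf                            # random feature nodes
    We = r.standard_normal((n_feat, n_enh))
    E = np.tanh(F @ We)                   # enhancement nodes
    A = np.hstack([F, E])
    W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y_onehot)
    return Wf, We, W

def bls_predict(X, params):
    Wf, We, W = params
    F = X @ Wf
    A = np.hstack([F, np.tanh(F @ We)])
    return A @ W                          # per-class scores

Y_onehot = np.eye(n_classes)[y]
subsystems = [bls_subsystem(Z, Y_onehot, seed=s) for s in range(5)]

# Decision fusion: average the sub-systems' class scores (a simple
# stand-in for the paper's decision-making mechanism).
scores = np.mean([bls_predict(Z, p) for p in subsystems], axis=0)
y_pred = scores.argmax(axis=1)

# Step 3: the noise-robustness probe mentioned in the abstract --
# add Gaussian noise with different standard deviations and re-evaluate.
for sigma in (0.1, 0.5, 1.0):
    Z_noisy = Z + rng.normal(0.0, sigma, Z.shape)
    noisy = np.mean([bls_predict(Z_noisy, p) for p in subsystems], axis=0)
    acc = (noisy.argmax(axis=1) == y).mean()
    print(f"sigma={sigma}: fused accuracy = {acc:.3f}")
```

On real SEED-style features one would of course fit on training subjects and report held-out accuracy; the sketch only illustrates the shape of the coordinate-then-fuse pipeline and the noise probe.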
Pages: 1213-1226
Page count: 14