Generalization Across Subjects and Sessions for EEG-based Emotion Recognition Using Multi-source Attention-based Dynamic Residual Transfer

Cited by: 2
Authors
Jiang, Wanqing [1 ]
Meng, Gaofeng [2 ]
Jiang, Tianzi [3 ,4 ]
Zuo, Nianming [3 ,4 ]
Affiliations
[1] Univ Chinese Acad Sci, Beijing, Peoples R China
[2] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[3] Chinese Acad Sci, Brainnetome Ctr, Beijing, Peoples R China
[4] Chinese Acad Sci, Inst Automat, NLPR, Beijing, Peoples R China
Source
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN | 2023
Funding
National Natural Science Foundation of China;
Keywords
Electroencephalogram (EEG); emotion recognition; multi-source domain adaptation; subject-independent;
DOI
10.1109/IJCNN54540.2023.10191587
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As an important element of affective brain-computer interfaces, electroencephalography (EEG) signals have enabled significant progress in emotion recognition owing to their high temporal resolution and reliability. However, EEG signals vary widely among individuals and are non-stationary over time. As a result, trained models often fail to maintain good classification accuracy on new individuals or new sessions at inference time. Although domain adaptation has been employed to address these issues, most approaches that treat different subjects or sessions as a single source domain ignore the large discrepancies between source domains, while methods that model multiple source domains must construct a separate domain adaptation branch for each source domain. Here, we propose a novel emotion recognition method, multi-source attention-based dynamic residual transfer (MS-ADRT). We introduce a dynamic feature extractor in which an attention module induces parameters that vary with each sample; by adapting to the sample, the model implicitly performs multi-source domain adaptation, reducing it to single-source domain adaptation. Maximum mean discrepancy (MMD) and maximum classifier discrepancy (MCD)-based adversarial training are further used to narrow the distance between source and target domains and to encourage the feature extractor to mine domain-invariant, emotion-discriminative features. We compared our algorithm with representative methods on the SEED and SEED-IV datasets and experimentally verified that it outperforms other state-of-the-art approaches. The proposed method provides a more effective transfer learning pathway for EEG-based emotion recognition in multi-source scenarios.
Pages: 8
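
For illustration, below is a minimal, hypothetical PyTorch sketch of the two ingredients the abstract describes: a residual block whose weights are an attention-weighted, per-sample mixture of experts (in the spirit of dynamic convolution), and a single-kernel maximum mean discrepancy (MMD) loss for aligning source and target features. All class, function, and parameter names (DynamicResidualBlock, mmd_loss, num_experts, sigma) are assumptions made for this sketch, not the authors' released implementation; the MCD-based adversarial training mentioned in the abstract is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicResidualBlock(nn.Module):
    """Residual block whose linear weights are a sample-conditioned mixture of K experts.

    A lightweight attention head maps each input sample to mixing coefficients over K
    candidate weight sets, so the effective parameters vary with the sample.
    """

    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.num_experts = num_experts
        # K candidate weight matrices and biases (the "experts").
        self.weight = nn.Parameter(torch.randn(num_experts, dim, dim) * 0.02)
        self.bias = nn.Parameter(torch.zeros(num_experts, dim))
        # Attention head: per-sample softmax over the K experts.
        self.attention = nn.Sequential(
            nn.Linear(dim, dim // 2),
            nn.ReLU(inplace=True),
            nn.Linear(dim // 2, num_experts),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) feature vectors.
        attn = F.softmax(self.attention(x), dim=-1)            # (batch, K)
        # Aggregate experts into one weight matrix and bias per sample.
        w = torch.einsum("bk,kio->bio", attn, self.weight)     # (batch, dim, dim)
        b = torch.einsum("bk,ko->bo", attn, self.bias)         # (batch, dim)
        out = torch.einsum("bi,bio->bo", x, w) + b
        return F.relu(out) + x                                 # residual connection


def mmd_loss(source: torch.Tensor, target: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD between source and target features with a single Gaussian kernel."""
    def kernel(a, b):
        d = torch.cdist(a, b) ** 2
        return torch.exp(-d / (2 * sigma ** 2))

    return kernel(source, source).mean() + kernel(target, target).mean() \
        - 2 * kernel(source, target).mean()


# Usage sketch: 310-dimensional inputs correspond, for example, to differential-entropy
# features over 62 channels x 5 frequency bands as commonly extracted from SEED.
extractor = DynamicResidualBlock(dim=310)
xs, xt = torch.randn(32, 310), torch.randn(32, 310)   # source / target mini-batches
alignment_loss = mmd_loss(extractor(xs), extractor(xt))

Because the expert mixture is recomputed per sample, source domains with different subjects or sessions are handled by one shared extractor rather than by one adaptation branch per source domain, which is the reduction to single-source adaptation described in the abstract.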