Cross-Subject EEG-Based Emotion Recognition Using Deep Metric Learning and Adversarial Training

Cited by: 5
Authors
Alameer, Hawraa Razzaq Abed [1 ]
Salehpour, Pedram [1 ]
Hadi Aghdasi, Seyyed [1 ]
Feizi-Derakhshi, Mohammad-Reza [1 ]
Affiliations
[1] Univ Tabriz, Fac Elect & Comp Engn, Dept Comp Engn, Tabriz 51666, Iran
Keywords
Electroencephalography; Emotion recognition; Brain modeling; Training; Accuracy; Feature extraction; Data models; Deep learning; Adversarial machine learning; EEG signals; cross-subject emotion recognition; deep metric learning; adversarial learning
DOI
10.1109/ACCESS.2024.3458833
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Owing to individual differences and the non-stationary nature of EEG signals, accurate cross-subject EEG emotion recognition remains in demand. Despite many successful attempts, generalized models that span subjects are still less accurate than models trained for a specific individual. Moreover, most cross-subject training methods assume that unlabeled data from the target subjects are available, an assumption that rarely holds in practice. To address these issues, this paper presents a novel deep similarity learning loss tailored to the emotion recognition task. The loss minimizes intra-emotion-class variation among EEG segments from different subjects while maximizing inter-emotion-class variation. Another key aspect of the proposed semantic embedding loss is that it preserves the order of emotion classes: the learned embedding space maintains the semantic ordering of emotions. We further integrate the deep similarity learning module with adversarial learning, which encourages a subject-invariant representation of EEG signals in an end-to-end training paradigm. We conduct experiments on three widely used datasets: SEED, SEED-GER, and DEAP. The results confirm that the proposed method effectively learns a subject-invariant representation from EEG signals and consistently outperforms state-of-the-art (SOTA) peer methods.
Pages: 130241-130252 (12 pages)
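
Illustrative sketch (PyTorch) of how the two components described in the abstract could fit together: a metric loss that pulls same-emotion EEG embeddings together across subjects and pushes different emotions apart by a margin that grows with their gap on an ordinal emotion scale, plus a subject classifier trained through gradient reversal so the encoder becomes subject-invariant. The paper's exact formulation is not given in the abstract, so the margin structure, the ordinal scale, the encoder architecture, and all dimensions (e.g. 310-dimensional differential-entropy features, 3 emotion classes, 14 training subjects) are assumptions for illustration, not the authors' implementation.

# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; negated, scaled gradient in the backward
    # pass, so the encoder is trained to fool the subject classifier.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def ordinal_metric_loss(emb, emotion_labels, base_margin=1.0):
    # Pull same-emotion embeddings together regardless of subject; push
    # different-emotion embeddings apart by a margin that grows with their
    # gap on an assumed ordinal scale (e.g. negative < neutral < positive).
    dist = torch.cdist(emb, emb, p=2)
    lbl = emotion_labels.unsqueeze(0).float()
    same = (lbl == lbl.t()).float()
    order_gap = (lbl - lbl.t()).abs()
    pull = same * dist.pow(2)                                   # intra-class compactness
    push = (1.0 - same) * F.relu(base_margin * order_gap - dist).pow(2)
    off_diag = 1.0 - torch.eye(emb.size(0), device=emb.device)  # ignore self-pairs
    return (off_diag * (pull + push)).sum() / off_diag.sum()

class Net(nn.Module):
    # Shared encoder with an emotion head and an adversarial subject head.
    def __init__(self, in_dim=310, emb_dim=64, n_emotions=3, n_subjects=14, lambd=1.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, emb_dim))
        self.emotion_head = nn.Linear(emb_dim, n_emotions)
        self.subject_head = nn.Linear(emb_dim, n_subjects)
        self.lambd = lambd
    def forward(self, x):
        z = self.encoder(x)
        return z, self.emotion_head(z), self.subject_head(GradReverse.apply(z, self.lambd))

# One training step on stand-in data (real inputs would be EEG features,
# e.g. 62 channels x 5 bands of differential entropy for SEED).
net = Net()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.randn(32, 310)
y_emo, y_sub = torch.randint(0, 3, (32,)), torch.randint(0, 14, (32,))
z, emo_logits, sub_logits = net(x)
loss = (F.cross_entropy(emo_logits, y_emo)
        + ordinal_metric_loss(z, y_emo)
        + F.cross_entropy(sub_logits, y_sub))   # adversary; reversed gradients reach the encoder
opt.zero_grad()
loss.backward()
opt.step()

With a fixed margin (order_gap ignored) the metric term reduces to a standard contrastive loss; the ordinal scaling is one simple way to make the embedding respect the semantic order of emotions, as the abstract describes.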