Enhancing cross-subject EEG emotion recognition through multi-source manifold metric transfer learning

Cited by: 1
Authors
Shi X. [1]
She Q. [1,4]
Fang F. [2]
Meng M. [1,4]
Tan T. [3]
Zhang Y. [2]
Affiliations
[1] School of Automation, Hangzhou Dianzi University, Zhejiang, Hangzhou
[2] Department of Biomedical Engineering, University of Miami, Coral Gables, FL
[3] Department of Rehabilitation Medicine, Zhejiang Provincial People's Hospital, People's Hospital of Hangzhou Medical College, Zhejiang, Hangzhou
[4] International Joint Research Laboratory for Autonomous Robotic Systems, Zhejiang, Hangzhou
Funding
National Natural Science Foundation of China;
Keywords
Affective brain-computer interface (aBCI); Electroencephalogram (EEG); Emotion recognition; Metric transfer learning;
DOI
10.1016/j.compbiomed.2024.108445
Abstract
Transfer learning (TL) has demonstrated its efficacy in addressing the cross-subject domain adaptation challenges in affective brain-computer interfaces (aBCI). However, previous TL methods usually quantify the distribution dissimilarity between two domains with a static distance metric, such as the Euclidean distance, overlooking the inherent links among similar samples and potentially leading to suboptimal feature mapping. In this study, we introduced a novel algorithm called multi-source manifold metric transfer learning (MSMMTL) to enhance the efficacy of conventional TL. Specifically, we first selected the source domains based on the Mahalanobis distance to enhance their quality, and then used a manifold feature mapping approach to map the source and target domains onto the Grassmann manifold to mitigate data drift between domains. In this newly established shared space, we optimized the Mahalanobis metric by maximizing the inter-class distances while minimizing the intra-class distances in the target domain. Recognizing that significant distribution discrepancies might persist across domains even on the manifold, we further imposed constraints on both domains under the Mahalanobis metric to keep the source and target distributions similar. This approach reduces distributional disparities and enhances electroencephalogram (EEG) emotion recognition performance. In cross-subject experiments, the MSMMTL model achieved average classification accuracies of 88.83% and 65.04% on SEED and DEAP, respectively, underscoring the superiority of the proposed MSMMTL over other state-of-the-art methods. MSMMTL can effectively address the problem of individual differences in EEG-based affective computing. © 2024 Elsevier Ltd
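The abstract describes a multi-stage pipeline: Mahalanobis-distance-based source selection, Grassmann-manifold feature mapping, and Mahalanobis metric optimization with cross-domain distribution constraints. The Python sketch below only illustrates the general shape of such a pipeline and is not the authors' implementation: all function names are hypothetical, PCA is used as a crude stand-in for the GFK-style manifold mapping, and scikit-learn's NeighborhoodComponentsAnalysis (a linear, Mahalanobis-type metric learner fitted on source labels) stands in for the paper's target-domain metric optimization and alignment constraints.

```python
# Hypothetical sketch of an MSMMTL-like pipeline; not the authors' code.
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis


def select_sources(source_sets, Xt, top_k=3):
    """Rank candidate source subjects by the mean Mahalanobis distance of their
    samples to the target-domain mean and keep the closest top_k
    (assumption: smaller distance indicates a more transferable source)."""
    mu_t = Xt.mean(axis=0)
    VI = np.linalg.pinv(np.cov(Xt, rowvar=False))  # inverse target covariance
    scores = [np.mean([mahalanobis(x, mu_t, VI) for x in Xs])
              for Xs, _ in source_sets]
    keep = np.argsort(scores)[:top_k]
    return [source_sets[i] for i in keep]


def shared_subspace(Xs, Xt, dim=30):
    """Crude stand-in for the Grassmann-manifold (GFK-style) mapping:
    project both domains onto a PCA subspace fitted on the pooled data
    (assumes the EEG feature dimensionality is at least `dim`)."""
    pca = PCA(n_components=dim).fit(np.vstack([Xs, Xt]))
    return pca.transform(Xs), pca.transform(Xt)


def msmmtl_like_predict(source_sets, Xt, dim=30, top_k=3):
    """For each selected source: map to the shared subspace, learn a linear
    (Mahalanobis-type) metric with NCA on the source labels, classify the
    target with k-NN in the learned metric space, then fuse by majority vote.
    Assumes integer class labels."""
    votes = []
    for Xs, ys in select_sources(source_sets, Xt, top_k):
        Zs, Zt = shared_subspace(Xs, Xt, dim)
        nca = NeighborhoodComponentsAnalysis(n_components=min(dim, 10),
                                             random_state=0).fit(Zs, ys)
        clf = KNeighborsClassifier(n_neighbors=5).fit(nca.transform(Zs), ys)
        votes.append(clf.predict(nca.transform(Zt)))
    votes = np.stack(votes)  # shape: (top_k, n_target_samples)
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
```

Replacing the PCA and NCA stand-ins with a true geodesic flow kernel and the paper's constrained target-domain metric objective would bring the sketch closer to MSMMTL as described above.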
Related papers
50 items in total
  • [41] Enhanced Subspace Alignment with Clustering and Weighting for Cross-Subject Multi-Session EEG-based Emotion Recognition
    Shirkarami, Mohsen
    Mohammadzade, Hoda
2023 30TH NATIONAL AND 8TH INTERNATIONAL IRANIAN CONFERENCE ON BIOMEDICAL ENGINEERING, ICBME, 2023 : 104 - 109
  • [42] Joint Feature Adaptation and Graph Adaptive Label Propagation for Cross-Subject Emotion Recognition From EEG Signals
    Peng, Yong
    Wang, Wenjuan
    Kong, Wanzeng
    Nie, Feiping
    Lu, Bao-Liang
    Cichocki, Andrzej
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2022, 13 (04) : 1941 - 1958
  • [43] Cross-subject EEG-based emotion recognition through dynamic optimization of random forest with sparrow search algorithm
    Zhang X.
    Wang S.
    Xu K.
    Zhao R.
    She Y.
    Mathematical Biosciences and Engineering, 2024, 21 (03) : 4779 - 4800
  • [44] CiABL: Completeness-Induced Adaptative Broad Learning for Cross-Subject Emotion Recognition With EEG and Eye Movement Signals
    Gong, Xinrong
    Chen, C. L. Philip
    Hu, Bin
    Zhang, Tong
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2024, 15 (04) : 1970 - 1984
  • [45] Gusa: Graph-Based Unsupervised Subdomain Adaptation for Cross-Subject EEG Emotion Recognition
    Li, Xiaojun
    Chen, C. L. Philip
    Chen, Bianna
    Zhang, Tong
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2024, 15 (03) : 1451 - 1462
  • [46] Research on Wearable Emotion Recognition Based on Multi-Source Domain Adversarial Transfer Learning
    Zou Y.-P.
    Wang D.-Y.
    Wang D.
    Zheng C.-L.
    Song Q.-F.
    Zhu Y.-Z.
    Fan C.-H.
    Wu K.-S.
Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47 (02) : 266 - 286
  • [47] Multi-Source Domain Transfer Discriminative Dictionary Learning Modeling for Electroencephalogram-Based Emotion Recognition
    Gu, Xiaoqing
    Cai, Weiwei
    Gao, Ming
    Jiang, Yizhang
    Ning, Xin
    Qian, Pengjiang
IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2022, 9 (06) : 1604 - 1612
  • [48] Coupled Projection Transfer Metric Learning for Cross-Session Emotion Recognition from EEG
    Shen, Fangyao
    Peng, Yong
    Dai, Guojun
    Lu, Baoliang
    Kong, Wanzeng
SYSTEMS, 2022, 10 (02)
  • [49] Cross-subject spatial filter transfer method for SSVEP-EEG feature recognition
    Yan, Wenqiang
    Wu, Yongcheng
    Du, Chenghang
    Xu, Guanghua
    JOURNAL OF NEURAL ENGINEERING, 2022, 19 (03)
  • [50] Investigating the Use of Pretrained Convolutional Neural Network on Cross-Subject and Cross-Dataset EEG Emotion Recognition
    Cimtay, Yucel
    Ekmekcioglu, Erhan
    SENSORS, 2020, 20 (07)