SFT-SGAT: A semi-supervised fine-tuning self-supervised graph attention network for emotion recognition and consciousness detection

Cited: 1
Authors
Qiu, Lina [1 ,2 ]
Zhong, Liangquan [1 ]
Li, Jianping [1 ]
Feng, Weisen [1 ]
Zhou, Chengju [1 ]
Pan, Jiahui [1 ]
Affiliations
[1] South China Normal Univ, Sch Artificial Intelligence, Guangzhou 510630, Peoples R China
[2] South China Normal Univ, Res Stn Math, Guangzhou 510630, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Emotion recognition; Cross-subject; Semi-supervised; Self-supervised; Graph attention network; EEG; BRAIN; DISORDERS; DEFICITS;
DOI
10.1016/j.neunet.2024.106643
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Emotion recognition is highly important in the field of brain-computer interfaces (BCIs). However, due to the individual variability in electroencephalogram (EEG) signals and the challenges in obtaining accurate emotional labels, traditional methods have shown poor performance in cross-subject emotion recognition. In this study, we propose a cross-subject EEG emotion recognition method based on a semi-supervised fine-tuning self-supervised graph attention network (SFT-SGAT). First, we model multi-channel EEG signals by constructing a graph structure that dynamically captures the spatiotemporal topological features of EEG signals. Second, we employ a self-supervised graph attention neural network to facilitate model training, mitigating the impact of signal noise on the model. Finally, a semi-supervised approach is used to fine-tune the model, enhancing its generalization ability in cross-subject classification. By combining supervised and unsupervised learning techniques, SFT-SGAT maximizes the utility of limited labeled data in EEG emotion recognition tasks, thereby enhancing the model's performance. Experiments based on leave-one-subject-out cross-validation demonstrate that SFT-SGAT achieves state-of-the-art cross-subject emotion recognition performance on the SEED and SEED-IV datasets, with accuracies of 92.04% and 82.76%, respectively. Furthermore, experiments conducted on a self-collected dataset comprising ten healthy subjects and eight patients with disorders of consciousness (DOCs) showed that SFT-SGAT achieved high classification performance in healthy subjects (maximum accuracy of 95.84%) and was successfully applied to DOC patients, with four patients achieving emotion recognition accuracies exceeding 60%. These experiments demonstrate the effectiveness of the proposed SFT-SGAT model in cross-subject EEG emotion recognition and its potential for assessing levels of consciousness in patients with DOCs.
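The core building block named in the abstract, a graph attention layer over EEG channels, can be sketched in a few lines. This is a minimal single-head sketch of the standard GAT formulation (masked attention over a channel adjacency graph), not a reproduction of the paper's dynamic spatiotemporal graph construction or its semi-supervised fine-tuning stage; all shapes and the toy adjacency are illustrative assumptions.

```python
import numpy as np

def gat_layer(X, A, W, a, alpha=0.2):
    """Single-head graph attention layer over EEG channels.

    X: (N, F) per-channel feature vectors (N = number of electrodes).
    A: (N, N) binary adjacency over channels (1 where an edge exists).
    W: (F, F') learned projection; a: (2*F',) attention weight vector.
    """
    H = X @ W                                   # project node features
    Fp = H.shape[1]
    # attention logits e_ij = LeakyReLU(a^T [h_i || h_j]),
    # split into a source term and a destination term
    src = H @ a[:Fp]                            # (N,)
    dst = H @ a[Fp:]                            # (N,)
    e = src[:, None] + dst[None, :]             # (N, N)
    e = np.where(e > 0, e, alpha * e)           # LeakyReLU
    e = np.where(A > 0, e, -1e9)                # mask non-neighbors
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)  # row-wise softmax
    return att @ H                              # aggregate neighbor features

# toy example: 4 EEG channels with 3 features each
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]])
W = rng.standard_normal((3, 8))
a = rng.standard_normal(16)
out = gat_layer(X, A, W, a)
print(out.shape)  # prints (4, 8)
```

In the paper's setting, the adjacency would be constructed dynamically from the EEG signals rather than fixed, and the layer would be trained first with a self-supervised objective and then fine-tuned semi-supervisedly on the labeled subset.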
Pages: 11
Related Papers
50 records in total
  • [31] Neighborhood-Aware Attention Network for Semi-supervised Face Recognition
    Zhang, Qi
    Lei, Zhen
    Li, Stan Z.
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [32] CopGAT: Co-propagation Self-supervised Graph Attention Network
    Zhang, Baoming
    Xu, Ming
    Chen, Mingcai
    Chen, Mingyuan
    Wang, Chongjun
    2022 IEEE INTL CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, BIG DATA & CLOUD COMPUTING, SUSTAINABLE COMPUTING & COMMUNICATIONS, SOCIAL COMPUTING & NETWORKING, ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM, 2022, : 18 - 25
  • [33] Censer: Curriculum Semi-supervised Learning for Speech Recognition Based on Self-supervised Pre-training
    Zhang, Bowen
    Cao, Songjun
    Zhang, Xiaoming
    Zhang, Yike
    Ma, Long
    Shinozaki, Takahiro
    INTERSPEECH 2022, 2022, : 2653 - 2657
  • [34] Rethinking Pseudo-Labeling for Semi-Supervised Facial Expression Recognition With Contrastive Self-Supervised Learning
    Fang, Bei
    Li, Xian
    Han, Guangxin
    He, Juhou
    IEEE ACCESS, 2023, 11 : 45547 - 45558
  • [35] S3Net: Self-Supervised Self-Ensembling Network for Semi-Supervised RGB-D Salient Object Detection
    Zhu, Lei
    Wang, Xiaoqiang
    Li, Ping
    Yang, Xin
    Zhang, Qing
    Wang, Weiming
    Schonlieb, Carola-Bibiane
    Chen, C. L. Philip
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 676 - 689
  • [36] Evaluation of a semi-supervised self-adjustment fine-tuning procedure for hearing aids for asymmetrical hearing loss
    Goesswein, Jonathan Albert
    Chalupper, Josef
    Kohl, Manuel
    Kinkel, Martin
    Kollmeier, Birger
    Rennies, Jan
    INTERNATIONAL JOURNAL OF AUDIOLOGY, 2024,
  • [37] Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion Recognition
    Siriwardhana, Shamane
    Reis, Andrew
    Weerasekera, Rivindu
    Nanayakkara, Suranga
    INTERSPEECH 2020, 2020, : 3755 - 3759
  • [38] Automatic Data Augmentation for Domain Adapted Fine-Tuning of Self-Supervised Speech Representations
    Zaiem, Salah
    Parcollet, Titouan
    Essid, Slim
    INTERSPEECH 2023, 2023, : 67 - 71
  • [39] Self-supervised Fine-tuning for Improved Content Representations by Speaker-invariant Clustering
    Chang, Heng-Jui
    Liu, Alexander H.
    Glass, James
    INTERSPEECH 2023, 2023, : 2983 - 2987
  • [40] Fine-Tuning for Bayer Demosaicking Through Periodic-Consistent Self-Supervised Learning
    Liu, Chang
    He, Songze
    Xu, Jiajun
    Li, Jia
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 989 - 993