DA-CapsNet: A multi-branch capsule network based on adversarial domain adaption for cross-subject EEG emotion recognition

Cited by: 31
Authors
Liu, Shuaiqi [1 ,2 ,3 ]
Wang, Zeyao [1 ,4 ]
An, Yanling [5 ]
Li, Bing [3 ]
Wang, Xinrui [1 ,4 ]
Zhang, Yudong [6 ]
Affiliations
[1] Hebei Univ, Coll Elect & Informat Engn, Baoding 071000, Hebei, Peoples R China
[2] Machine Vis Technol Innovat Ctr Hebei Prov, Baoding 071000, Peoples R China
[3] Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence Syst, Beijing 100190, Peoples R China
[4] Key Lab Digital Med Engn Hebei Prov, Baoding 071002, Peoples R China
[5] Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R China
[6] Univ Leicester, Sch Comp & Math, Leicester LE1 7RH, England
Funding
National Natural Science Foundation of China;
Keywords
EEG emotion recognition; Capsule network; Adversarial domain adaptation; Transfer learning;
DOI
10.1016/j.knosys.2023.111137
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Due to inter-individual variability, cross-subject electroencephalogram (EEG)-based emotion recognition is a challenging task. In this paper, we construct a multi-branch capsule network based on adversarial domain adaptation (DA-CapsNet) to improve the performance of cross-subject EEG emotion recognition. To fully capture the varying intensity characteristics of a single emotion, DA-CapsNet first decomposes the source- and target-domain EEG signals into four frequency bands, homomorphically groups the data in each band, and extracts differential entropy (DE) features for each group separately. Taking the spatial arrangement of the electrodes into account, the DE features are mapped into a two-dimensional matrix to form a homomorphic difference cube sequence (HDCS). Second, to enhance the feature information of the same emotion and improve the running efficiency of the network, we construct a parallel multi-branch primary capsule network (CapsNet) that extracts discriminative features from the HDCS and fuses them as the input to the capsule emotion classifier. Finally, to reduce inter-domain distribution discrepancies, we introduce adversarial domain adaptation to improve cross-subject emotion recognition performance. Extensive experiments on three public EEG datasets show that the proposed algorithm performs well.
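To make the feature-extraction step concrete, the sketch below shows how band-wise DE features are commonly computed before being mapped into a 2D electrode grid. This is not the authors' code: the sampling rate, band limits, channel count, and filter order are illustrative assumptions, and the HDCS electrode mapping is only indicated in a comment.

```python
# A minimal sketch (assumed settings, not the paper's exact pipeline) of
# differential-entropy (DE) features over four EEG frequency bands.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 14), "beta": (14, 31), "gamma": (31, 50)}

def bandpass(x, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter along the last axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def de_feature(x):
    """DE of a Gaussian signal: 0.5 * ln(2*pi*e*variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x, axis=-1) + 1e-12)

def de_per_band(eeg):
    """eeg: (channels, samples) -> (bands, channels) DE matrix.
    Each band's channel vector would then be scattered into a 2D grid
    matching the electrode layout to build the HDCS."""
    return np.stack([de_feature(bandpass(eeg, lo, hi))
                     for lo, hi in BANDS.values()])

eeg = np.random.randn(62, FS * 4)   # 4 s of synthetic 62-channel EEG
print(de_per_band(eeg).shape)       # (4, 62)
```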
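The parallel multi-branch primary CapsNet can be pictured as several convolutional branches run side by side whose outputs are fused and squashed into capsule vectors. The PyTorch sketch below assumes three branches with different kernel sizes and 8-dimensional capsules; the branch count and dimensions are assumptions, not the paper's configuration.

```python
# A minimal sketch of a parallel multi-branch primary capsule layer
# (branch kernels and capsule size are illustrative assumptions).
import torch
import torch.nn as nn

def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing: shrink short vectors, preserve orientation."""
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

class MultiBranchPrimaryCaps(nn.Module):
    def __init__(self, in_ch=4, caps_dim=8, kernels=(3, 5, 7)):
        super().__init__()
        # One conv branch per kernel size, run in parallel and fused.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, 32, k, padding=k // 2) for k in kernels
        )
        self.caps_dim = caps_dim

    def forward(self, x):                    # x: (B, 4 bands, H, W) HDCS
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        u = feats.view(x.size(0), -1, self.caps_dim)  # channels -> capsules
        return squash(u)                     # (B, num_caps, caps_dim)

caps = MultiBranchPrimaryCaps()
print(caps(torch.randn(2, 4, 9, 9)).shape)  # torch.Size([2, 972, 8])
```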
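Adversarial domain adaptation is frequently implemented with a gradient reversal layer (DANN-style), in which a domain discriminator is trained to tell source from target while reversed gradients push the feature extractor toward domain-invariant features. The abstract specifies adversarial adaptation but not this exact mechanism, so the sketch below is one plausible instantiation; the feature width and discriminator head are assumptions.

```python
# A minimal gradient-reversal sketch of adversarial domain adaptation
# (DANN-style; an assumed instantiation, not the paper's exact design).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        # Flip the gradient so the feature extractor learns to confuse
        # the domain discriminator, aligning source and target features.
        return -ctx.lamb * grad_out, None

domain_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

def domain_loss(features, domain_labels, lamb=1.0):
    """features: (B, 128) fused capsule features; labels: 0=source, 1=target."""
    logits = domain_head(GradReverse.apply(features, lamb))
    return nn.functional.cross_entropy(logits, domain_labels)

f = torch.randn(8, 128, requires_grad=True)
d = torch.randint(0, 2, (8,))
domain_loss(f, d).backward()  # gradients w.r.t. f are reversed through GRL
print(f.grad.shape)           # torch.Size([8, 128])
```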
Pages: 12