MTLFuseNet: A novel emotion recognition model based on deep latent feature fusion of EEG signals and multi-task learning

Cited by: 22
Authors
Li, Rui [1 ]
Ren, Chao [1 ]
Ge, Yiqing [1 ]
Zhao, Qiqi [1 ]
Yang, Yikun [1 ]
Shi, Yuhan [1 ]
Zhang, Xiaowei [1 ]
Hu, Bin [1 ]
Affiliations
[1] Lanzhou Univ, Sch Informat Sci & Engn, Gansu Prov Key Lab Wearable Comp, Lanzhou 730000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Emotion recognition; EEG; Feature fusion; Multi-task learning; NEURAL-NETWORK; SYSTEM; LSTM;
DOI
10.1016/j.knosys.2023.110756
CLC Number
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
How to extract discriminative latent feature representations from electroencephalography (EEG) signals and build a generalized model is a key topic in EEG-based emotion recognition research. This study proposed a novel emotion recognition model based on deep latent feature fusion of EEG signals and multi-task learning, referred to as MTLFuseNet. MTLFuseNet learned spatio-temporal latent features of EEG in an unsupervised manner by a variational autoencoder (VAE) and learned the spatio-spectral features of EEG in a supervised manner by a graph convolutional network (GCN) and gated recurrent unit (GRU) network. Afterward, the two latent features were fused to form more complementary and discriminative spatio-temporal-spectral fusion features for EEG signal representation. In addition, MTLFuseNet was constructed based on multi-task learning. The focal loss was introduced to solve the problem of unbalanced sample classes in an emotional dataset, and the triplet-center loss was introduced to make the fused latent feature vectors more discriminative. Finally, a subject-independent leave-one-subject-out cross-validation strategy was used to validate the model extensively on two public datasets, DEAP and DREAMER. On the DEAP dataset, the average accuracies for valence and arousal were 71.33% and 73.28%, respectively. On the DREAMER dataset, the average accuracies for valence and arousal were 80.43% and 83.33%, respectively. The experimental results show that the proposed MTLFuseNet model achieves excellent recognition performance, outperforming the state-of-the-art methods. © 2023 Elsevier B.V. All rights reserved.
Pages: 16
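For readers who want a concrete picture of the multi-task objective described in the abstract, the following is a minimal PyTorch sketch, not the authors' released code: the VAE and GCN-GRU encoders are abstracted to pre-computed latent vectors, and the class names (FocalLoss, TripletCenterLoss, FusionHeads), layer sizes, and loss weights are illustrative assumptions. It shows the fusion of the two latent streams, two task heads for valence and arousal, a binary focal loss for class imbalance, and a triplet-center loss that makes the fused features more discriminative.

```python
# Minimal sketch (assumed structure, not the authors' implementation):
# fuse two pre-extracted latent feature vectors and train two binary heads
# (valence, arousal) with a focal loss plus a triplet-center loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FocalLoss(nn.Module):
    """Binary focal loss that down-weights easy examples from the majority class."""
    def __init__(self, alpha=0.25, gamma=2.0):
        super().__init__()
        self.alpha, self.gamma = alpha, gamma

    def forward(self, logits, targets):
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p_t = torch.exp(-bce)  # probability assigned to the true class
        return (self.alpha * (1 - p_t) ** self.gamma * bce).mean()


class TripletCenterLoss(nn.Module):
    """Pull fused features toward their class center, push them from the other center."""
    def __init__(self, feat_dim, num_classes=2, margin=1.0):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, feats, labels):
        dists = torch.cdist(feats, self.centers)            # (batch, num_classes)
        pos = dists.gather(1, labels.unsqueeze(1)).squeeze(1)
        neg = dists.gather(1, (1 - labels).unsqueeze(1)).squeeze(1)
        return F.relu(pos - neg + self.margin).mean()


class FusionHeads(nn.Module):
    """Concatenate the VAE (spatio-temporal) and GCN-GRU (spatio-spectral) latents,
    then predict valence and arousal from the shared fused representation."""
    def __init__(self, dim_vae=64, dim_gcn_gru=64, hidden=128):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(dim_vae + dim_gcn_gru, hidden), nn.ReLU())
        self.valence_head = nn.Linear(hidden, 1)
        self.arousal_head = nn.Linear(hidden, 1)

    def forward(self, z_vae, z_gcn_gru):
        fused = self.fuse(torch.cat([z_vae, z_gcn_gru], dim=1))
        return fused, self.valence_head(fused).squeeze(1), self.arousal_head(fused).squeeze(1)


# Toy training step on random tensors standing in for the two latent feature streams.
model = FusionHeads()
focal, tcl = FocalLoss(), TripletCenterLoss(feat_dim=128)
opt = torch.optim.Adam(list(model.parameters()) + list(tcl.parameters()), lr=1e-3)

z_vae, z_ggru = torch.randn(32, 64), torch.randn(32, 64)
y_val = torch.randint(0, 2, (32,)).float()
y_aro = torch.randint(0, 2, (32,)).float()

fused, logit_v, logit_a = model(z_vae, z_ggru)
loss = focal(logit_v, y_val) + focal(logit_a, y_aro) + 0.1 * tcl(fused, y_val.long())
opt.zero_grad()
loss.backward()
opt.step()
```

In the paper's actual setting, the latent inputs would come from the trained VAE and GCN-GRU branches and evaluation would follow the leave-one-subject-out protocol; the snippet only illustrates how the fusion and the two loss terms can combine in a single multi-task training step.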