A novel signal channel attention network for multi-modal emotion recognition

Cited by: 1
Authors
Du, Ziang [1 ]
Ye, Xia [1 ]
Zhao, Pujie [1 ]
Institution
[1] Xian Res Inst High Tech, Xian, Shaanxi, Peoples R China
Source
FRONTIERS IN NEUROROBOTICS | 2024, Vol. 18
Keywords
hypercomplex neural networks; physiological signals; attention fusion module; multi-modal fusion; emotion recognition;
DOI
10.3389/fnbot.2024.1442080
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Physiological signal recognition is crucial in emotion recognition, and recent advances in multi-modal fusion have enabled the integration of various physiological signals for improved recognition performance. However, current emotion recognition models for hypercomplex multi-modal signals are limited by their fusion methods and insufficient attention mechanisms, which prevents further gains in classification performance. To address these challenges, we propose a new model framework named Signal Channel Attention Network (SCA-Net), which comprises three main components: an encoder, an attention fusion module, and a decoder. In the attention fusion module, we developed five types of attention mechanisms inspired by existing research and performed comparative experiments on the public MAHNOB-HCI dataset. These experiments demonstrate that the attention fusion module added to our baseline model improves both accuracy and F1 score. We also conducted ablation experiments on the most effective attention fusion module to verify the benefits of multi-modal fusion. Additionally, we adjusted the training process for each attention fusion module by using different early-stopping parameters to prevent overfitting.
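As a rough illustration of the three-part design described in the abstract, the sketch below wires per-modality signal encoders into an attention-based fusion module followed by a classification decoder. This is a minimal sketch under assumed choices (PyTorch, 1D convolutional encoders, multi-head self-attention over modality features, a linear decoder); the class names ModalityEncoder, AttentionFusion, and SCANetSketch, as well as all dimensions, are hypothetical, and the paper's actual hypercomplex layers and its five attention variants are not specified in this record.

# Minimal, illustrative sketch of an encoder / attention-fusion / decoder
# pipeline for multi-modal physiological signals. All module names, layer
# choices, and dimensions are assumptions for illustration only; they are
# not the layers used in SCA-Net.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Encodes one physiological signal (e.g., EEG or ECG) into a feature vector."""

    def __init__(self, in_channels: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden_dim, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, time) -> (batch, hidden_dim)
        return self.net(x).squeeze(-1)


class AttentionFusion(nn.Module):
    """Fuses per-modality features with multi-head self-attention
    (one of many possible attention-fusion designs)."""

    def __init__(self, hidden_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_modalities, hidden_dim)
        fused, _ = self.attn(feats, feats, feats)
        return fused.mean(dim=1)  # (batch, hidden_dim)


class SCANetSketch(nn.Module):
    """Hypothetical end-to-end wiring: encoders -> attention fusion -> decoder."""

    def __init__(self, modality_channels, num_classes: int, hidden_dim: int = 128):
        super().__init__()
        self.encoders = nn.ModuleList(
            ModalityEncoder(c, hidden_dim) for c in modality_channels
        )
        self.fusion = AttentionFusion(hidden_dim)
        self.decoder = nn.Linear(hidden_dim, num_classes)  # emotion-class logits

    def forward(self, signals):
        # signals: list of tensors, one per modality, each (batch, channels, time)
        feats = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, signals)], dim=1
        )
        return self.decoder(self.fusion(feats))


if __name__ == "__main__":
    model = SCANetSketch(modality_channels=[32, 1], num_classes=4)
    eeg = torch.randn(8, 32, 512)   # e.g., 32-channel EEG segment
    ecg = torch.randn(8, 1, 512)    # e.g., single-lead ECG segment
    print(model([eeg, ecg]).shape)  # torch.Size([8, 4])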
Pages: 11
Related Papers
(50 in total)
  • [31] Continuous Multi-modal Emotion Prediction in Video based on Recurrent Neural Network Variants with Attention
    Raju, Joyal
    Gaus, Yona Falinie A.
    Breckon, Toby P.
    20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021, : 688 - 693
  • [32] A Multi-modal Visual Emotion Recognition Method to Instantiate an Ontology
    Heredia, Juan Pablo A.
    Cardinale, Yudith
    Dongo, Irvin
    Diaz-Amado, Jose
    PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON SOFTWARE TECHNOLOGIES (ICSOFT), 2021, : 453 - 464
  • [33] A Two-Stage Attention Based Modality Fusion Framework for Multi-Modal Speech Emotion Recognition
    Hu, Dongni
    Chen, Chengxin
    Zhang, Pengyuan
    Li, Junfeng
    Yan, Yonghong
    Zhao, Qingwei
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2021, E104D (08) : 1391 - 1394
  • [34] AFLEMP: Attention-based Federated Learning for Emotion recognition using Multi-modal Physiological data
    Gahlan, Neha
    Sethia, Divyashikha
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 94
  • [35] Multi-Modal Recurrent Attention Networks for Facial Expression Recognition
    Lee, Jiyoung
    Kim, Sunok
    Kim, Seungryong
    Sohn, Kwanghoon
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 6977 - 6991
  • [36] Multi-modal embeddings using multi-task learning for emotion recognition
    Khare, Aparna
    Parthasarathy, Srinivas
    Sundaram, Shiva
    INTERSPEECH 2020, 2020, : 384 - 388
  • [37] Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review
    Zhang, Jianhua
    Yin, Zhong
    Chen, Peng
    Nichele, Stefano
    INFORMATION FUSION, 2020, 59 : 103 - 126
  • [38] Multi-Modal Fusion Emotion Recognition Method of Speech Expression Based on Deep Learning
    Liu, Dong
    Wang, Zhiyong
    Wang, Lifeng
    Chen, Longxi
    FRONTIERS IN NEUROROBOTICS, 2021, 15
  • [39] Low-level fusion of audio and video feature for multi-modal emotion recognition
    Wimmer, Matthias
    Schuller, Bjoern
    Arsic, Dejan
    Rigoll, Gerhard
    Radig, Bernd
    VISAPP 2008: PROCEEDINGS OF THE THIRD INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS, VOL 2, 2008, : 145 - +
  • [40] Multi-Modal Emotion Recognition for Online Education Using Emoji Prompts
    Qin, Xingguo
    Zhou, Ya
    Li, Jun
    APPLIED SCIENCES-BASEL, 2024, 14 (12):