Cross-Modal Guiding Neural Network for Multimodal Emotion Recognition From EEG and Eye Movement Signals

Cited by: 0
Authors
Fu, Baole [1 ,2 ]
Chu, Wenhao [1 ,2 ]
Gu, Chunrui [1 ,2 ]
Liu, Yinhua [1 ,2 ,3 ]
Affiliations
[1] Qingdao Univ, Inst Future, Qingdao 266071, Peoples R China
[2] Qingdao Univ, Sch Automat, Qingdao 266071, Peoples R China
[3] Qingdao Univ, Shandong Prov Key Lab Ind Control Technol, Qingdao 266071, Peoples R China
Keywords
Feature extraction; Electroencephalography; Emotion recognition; Brain modeling; Videos; Convolution; Accuracy; Multimodal emotion recognition; electroencephalogram (EEG); convolutional neural network (CNN); cross-modal guidance; feature selection
DOI
10.1109/JBHI.2024.3419043
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Multimodal emotion recognition research is gaining attention because of the emerging trend of integrating information from different sensory modalities to improve performance. Electroencephalogram (EEG) signals are considered objective indicators of emotion and provide precise insights, although their collection is complex. In contrast, eye movement signals are more susceptible to environmental and individual differences but are convenient to collect. Conventional emotion recognition methods typically use separate models for different modalities, potentially overlooking their inherent connections. This study introduces a cross-modal guiding neural network designed to fully leverage the strengths of both modalities. The network includes a dual-branch feature extraction module that simultaneously extracts features from EEG and eye movement signals, as well as a feature guidance module that uses EEG features to direct eye movement feature extraction, reducing the impact of subjective factors. A feature reweighting module is also introduced to explore emotion-related features within eye movement signals, thereby improving emotion classification accuracy. Experiments on both the SEED-IV dataset and our collected dataset demonstrate the strong performance of the model, confirming its efficacy.
Pages: 5865-5876
Page count: 12
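As a rough illustration of the pipeline the abstract describes (dual-branch feature extraction, EEG-guided processing of eye movement features, and feature reweighting), here is a minimal NumPy sketch. All dimensions, weight shapes, and the gating/softmax formulations are assumptions for illustration only; the paper's actual CNN layers and module designs are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical dimensions: 310 EEG features (62 channels x 5 frequency
# bands is a common SEED-IV setup), 33 eye movement features, 64-d branch
# outputs, and 4 emotion classes.
batch, eeg_dim, eye_dim, feat_dim, n_classes = 8, 310, 33, 64, 4

eeg = rng.standard_normal((batch, eeg_dim))
eye = rng.standard_normal((batch, eye_dim))

# Randomly initialized weights stand in for trained parameters.
W_eeg = rng.standard_normal((eeg_dim, feat_dim)) * 0.05
W_eye = rng.standard_normal((eye_dim, feat_dim)) * 0.05
W_gate = rng.standard_normal((feat_dim, feat_dim)) * 0.05
W_rw = rng.standard_normal((feat_dim, feat_dim)) * 0.05
W_cls = rng.standard_normal((2 * feat_dim, n_classes)) * 0.05

# 1) Dual-branch feature extraction.
f_eeg = relu(eeg @ W_eeg)
f_eye = relu(eye @ W_eye)

# 2) Feature guidance: EEG features produce a gate that modulates
#    (guides) the eye movement features.
gate = sigmoid(f_eeg @ W_gate)
f_eye_guided = f_eye * gate

# 3) Feature reweighting: softmax weights emphasize emotion-related
#    dimensions within the eye movement features.
weights = softmax(f_eye_guided @ W_rw, axis=-1)
f_eye_rw = f_eye_guided * weights

# 4) Fuse both modalities and classify.
fused = np.concatenate([f_eeg, f_eye_rw], axis=-1)
logits = fused @ W_cls  # shape: (batch, n_classes)
```

The key design choice sketched here is that the guidance signal flows one way, from the more objective EEG branch into the noisier eye movement branch, matching the asymmetry the abstract motivates.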