Cross-Modal Guiding Neural Network for Multimodal Emotion Recognition From EEG and Eye Movement Signals

Cited by: 0
Authors
Fu, Baole [1 ,2 ]
Chu, Wenhao [1 ,2 ]
Gu, Chunrui [1 ,2 ]
Liu, Yinhua [1 ,2 ,3 ]
Affiliations
[1] Qingdao Univ, Inst Future, Qingdao 266071, Peoples R China
[2] Qingdao Univ, Sch Automat, Qingdao 266071, Peoples R China
[3] Qingdao Univ, Shandong Prov Key Lab Ind Control Technol, Qingdao 266071, Peoples R China
Keywords
Feature extraction; Electroencephalography; Emotion recognition; Brain modeling; Videos; Convolution; Accuracy; Multimodal emotion recognition; electroencephalogram (EEG); convolutional neural network (CNN); cross-modal guidance; feature selection;
DOI
10.1109/JBHI.2024.3419043
Chinese Library Classification
TP [Automation technology, computer technology];
Discipline Code
0812;
Abstract
Multimodal emotion recognition research is gaining attention because of the emerging trend of integrating information from different sensory modalities to improve performance. Electroencephalogram (EEG) signals are considered objective indicators of emotions and provide precise insights despite their complex data collection. In contrast, eye movement signals are more susceptible to environmental and individual differences but offer convenient data collection. Conventional emotion recognition methods typically use separate models for different modalities, potentially overlooking their inherent connections. This study introduces a cross-modal guiding neural network designed to fully leverage the strengths of both modalities. The network includes a dual-branch feature extraction module that simultaneously extracts features from EEG and eye movement signals. In addition, the network includes a feature guidance module that uses EEG features to direct eye movement feature extraction, reducing the impact of subjective factors. This study also introduces a feature reweighting module to explore emotion-related features within eye movement signals, thereby improving emotion classification accuracy. Experiments on both the SEED-IV dataset and our collected dataset demonstrate the strong performance of the model, confirming its efficacy.
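The cross-modal guidance and feature reweighting described in the abstract can be sketched as follows. This is a minimal illustrative example, not the authors' actual architecture: all shapes, the linear projection `w_gate`, the sigmoid gate, and the concatenation-based fusion are assumptions introduced here to show the general idea of EEG features reweighting eye-movement features before classification.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def guide_and_reweight(eeg_feat, eye_feat, w_gate):
    """Use EEG features to gate (reweight) eye-movement features,
    then fuse the two modalities by concatenation."""
    gate = sigmoid(eeg_feat @ w_gate)   # per-dimension weights in (0, 1)
    eye_guided = eye_feat * gate        # element-wise reweighting
    return np.concatenate([eeg_feat, eye_guided], axis=-1)

# Hypothetical feature tensors: batch of 4 samples,
# 32-dim EEG features and 16-dim eye-movement features.
eeg_feat = rng.standard_normal((4, 32))
eye_feat = rng.standard_normal((4, 16))
w_gate = rng.standard_normal((32, 16)) * 0.1  # projects EEG -> eye gate

fused = guide_and_reweight(eeg_feat, eye_feat, w_gate)
print(fused.shape)  # (4, 48) = 32 EEG dims + 16 reweighted eye dims
```

Because the gate lies in (0, 1), dimensions of the eye-movement features that the EEG branch deems less relevant are attenuated rather than dropped, which mirrors the paper's stated goal of reducing the impact of subjective factors in the eye-movement modality.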
Pages: 5865-5876
Page count: 12
Related Papers
50 records
  • [31] Cross-modal dynamic convolution for multi-modal emotion recognition
    Wen, Huanglu
    You, Shaodi
    Fu, Ying
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2021, 78
  • [32] Cross-Cultural Emotion Recognition With EEG and Eye Movement Signals Based on Multiple Stacked Broad Learning System
    Gong, Xinrong
    Chen, C. L. Philip
    Zhang, Tong
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024, 11 (02) : 2014 - 2025
  • [33] Cross-modal integration of multimodal courtship signals in a wolf spider
    Kozak, Elizabeth C.
    Uetz, George W.
    ANIMAL COGNITION, 2016, 19 (06) : 1173 - 1181
  • [35] CoDF-Net: coordinated-representation decision fusion network for emotion recognition with EEG and eye movement signals
    Gong, Xinrong
    Dong, Yihan
    Zhang, Tong
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (04) : 1213 - 1226
  • [37] A Feature-Fused Convolutional Neural Network for Emotion Recognition From Multichannel EEG Signals
    Yao, Qunli
    Gu, Heng
    Wang, Shaodi
    Li, Xiaoli
    IEEE SENSORS JOURNAL, 2022, 22 (12) : 11954 - 11964
  • [38] Cross-Modal Enhancement Network for Multimodal Sentiment Analysis
    Wang, Di
    Liu, Shuai
    Wang, Quan
    Tian, Yumin
    He, Lihuo
    Gao, Xinbo
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 4909 - 4921
  • [39] Multimodal Emotion Recognition using EEG and Eye Tracking Data
    Zheng, Wei-Long
    Dong, Bo-Nan
    Lu, Bao-Liang
    2014 36TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2014, : 5040 - 5043
  • [40] GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition
    Li, Jiang
    Wang, Xiaoping
    Lv, Guoqing
    Zeng, Zhigang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 77 - 89