Research on Multimodal Emotion Recognition Based on Fusion of Electroencephalogram and Electrooculography

Cited by: 4
Authors
Yin, Jialai [1 ,2 ]
Wu, Minchao [3 ]
Yang, Yan [4 ]
Li, Ping [3 ]
Li, Fan [5 ]
Liang, Wen [6 ]
Lv, Zhao [5 ,7 ]
Affiliations
[1] Anhui Univ, AHU IAI AI Joint Lab, Hefei 230039, Peoples R China
[2] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei 230088, Peoples R China
[3] Anhui Univ, Lab Intelligent Informat & Human Comp Interact, Hefei 230039, Peoples R China
[4] Chinese Acad Sci, Inst Biophys IBP, State Key Lab Brain & Cognit Sci, Beijing 100101, Peoples R China
[5] Civil Aviat Flight Univ China, Key Lab Flight Tech & Flight Safety, Guanghan 618307, Peoples R China
[6] Google Res, Mountain View, CA 94043 USA
[7] Anhui Univ, Sch Comp Sci & Technol, Anhui Prov Key Lab Multimodal Cognit Computat, Hefei 230601, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Electroencephalography; Feature extraction; Emotion recognition; Brain modeling; Transformers; Electrooculography; Time-domain analysis; Attention mechanisms; electroencephalogram (EEG) and electrooculography (EOG); multimodal; olfactory video emotion evocation; transformer; FEATURE-EXTRACTION;
DOI
10.1109/TIM.2024.3370813
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
Emotion recognition plays a vital role in building a harmonious society and in emotional interaction. Recent research has shown that deep-learning-based emotion recognition methods suffer from two problems: neglected correlations among multimodal channels and insufficient emotion elicitation. To address these problems, we propose a multimodal and channel attention fusion transformer (MCAF-Transformer). First, we employ an olfactory-video paradigm to evoke emotions more fully and to acquire electroencephalogram (EEG) and electrooculography (EOG) signals. Second, the model makes full use of the multimodal channel, time-domain, and spatial-domain information of the EEG and EOG signals: channel attention captures the correlations among different channels, and the transformer improves recognition accuracy by modeling the global dependence along the temporal order. We conducted extensive experiments on the olfactory-video emotion dataset and achieved an accuracy of 94.63%. The results show that olfactory videos evoke emotions more adequately than videos alone and that the MCAF-Transformer significantly outperforms other emotion recognition methods.
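The channel-attention step the abstract describes can be illustrated with a minimal NumPy sketch of squeeze-and-excitation-style gating over fused EEG/EOG channels. The channel counts (32 EEG + 4 EOG), time length, reduction ratio, and random weight matrices below are illustrative assumptions, not the paper's actual implementation or learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(x, reduction=4):
    """Re-weight each channel of a (channels, time) block by a gate
    in (0, 1). Weights are random stand-ins for learned parameters."""
    c, _ = x.shape
    hidden_dim = max(c // reduction, 1)
    w1 = rng.standard_normal((c, hidden_dim))
    w2 = rng.standard_normal((hidden_dim, c))
    squeeze = x.mean(axis=1)                      # global temporal average per channel
    hidden = np.maximum(squeeze @ w1, 0.0)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid gate per channel
    return x * gate[:, None], gate

# Hypothetical fused block: 32 EEG + 4 EOG channels over 128 time steps
x = rng.standard_normal((36, 128))
y, gate = channel_attention(x)
```

In the full model, the gated channel block would then be fed to a transformer encoder to capture the global temporal dependence mentioned in the abstract; that stage is omitted here for brevity.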
Pages: 1-12
Page count: 12