Valence-Arousal Model based Emotion Recognition using EEG, peripheral physiological signals and Facial Expression

Cited by: 12
Authors
Zhu, Qingyang [1 ]
Lu, Guanming [1 ]
Yan, Jingjie [1 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Coll Telecommun & Informat Engn, Nanjing, Peoples R China
Source
ICMLSC 2020: PROCEEDINGS OF THE 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND SOFT COMPUTING | 2020
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Emotion recognition; EEG signals; Facial expressions; Peripheral physiological signals; Decision-Level fusion; Valence-Arousal space;
DOI
10.1145/3380688.3380694
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Emotion recognition plays a particularly important role in the field of artificial intelligence. However, past EEG-based emotion recognition has mostly been unimodal, or at most bimodal with EEG as one modality. This paper uses deep learning to perform emotion recognition in the valence-arousal dimension on a multimodal combination of EEG, peripheral physiological signals, and facial expressions. The experiment uses the complete data of 18 participants in the Database for Emotion Analysis Using Physiological Signals (DEAP) to classify EEG, peripheral physiological signals, and facial expression video, both unimodally and with multimodal fusion. The experiments demonstrate that the accuracy of multimodal fusion exceeds that of unimodal and bimodal fusion: the multimodal approach compensates for the deficiencies of unimodal and bimodal information sources.
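The decision-level fusion named in the keywords can be sketched as a weighted combination of per-modality classifier outputs. A minimal sketch for a binary valence task, with hypothetical probabilities and weights (the paper's actual fusion rule and weights are not reproduced here):

```python
import numpy as np

# Hypothetical class probabilities [P(low valence), P(high valence)]
# from three unimodal classifiers; the values are illustrative only.
p_eeg    = np.array([0.30, 0.70])  # EEG classifier
p_periph = np.array([0.45, 0.55])  # peripheral physiological signals
p_face   = np.array([0.20, 0.80])  # facial expression video

# Decision-level fusion: weighted sum of the unimodal probability
# vectors, then an argmax over the fused scores. Weights are assumed.
weights = np.array([0.4, 0.3, 0.3])
fused = (weights[0] * p_eeg
         + weights[1] * p_periph
         + weights[2] * p_face)
label = int(np.argmax(fused))  # 0 = low valence, 1 = high valence
```

Because each unimodal output is a probability vector and the weights sum to one, the fused vector is also a valid probability distribution over the two valence classes.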
Pages: 81-85
Page count: 5
Related papers
23 entries in total
  • [1] Anonymous, 2016, 2016 IEEE 18 INT WOR
  • [2] Arapakis I., 2009, Proceedings of the 17th ACM International Conference on Multimedia, P461, DOI 10.1145/1631272.1631336
  • [3] Chanel G., Kierkels J.J.M., Soleymani M., Pun T. Short-term emotion assessment in a recall paradigm. International Journal of Human-Computer Studies, 2009, 67(08): 607-627
  • [4] Chanthaphan N., Uchimura K., Satonaka T., Makioka T. 2015 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), 2015: 117-124
  • [5] Dhall A., 2017, P 19 ACM INT C MULT, P524, DOI 10.1145/3136755.3143004
  • [6] Ekman P., Friesen W.V., O'Sullivan M., Chan A., Diacoyanni-Tarlatzis I., Heider K., Krause R., LeCompte W.A., Pitcairn T., Ricci-Bitti P.E., Scherer K., Tomita M., Tzavaras A. Universals and cultural differences in the judgments of facial expressions of emotion. Journal of Personality and Social Psychology, 1987, 53(04): 712-717
  • [7] Guo G.D., Dyer C.R. Learning from examples in the small sample case: Face expression recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2005, 35(03): 477-488
  • [8] Huang Y., Yang J., Liao P., Pan J. Fusion of facial expressions and EEG for multimodal emotion recognition. Computational Intelligence and Neuroscience, 2017, 2017
  • [9] Kim J., Andre E. Emotion recognition based on physiological changes in music listening. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(12): 2067-2083
  • [10] Koelstra S., Patras I. Fusion of facial expressions and EEG for implicit affective tagging. Image and Vision Computing, 2013, 31(02): 164-174