Multimodal Emotion Recognition in Response to Videos

Cited by: 459
Authors
Soleymani, Mohammad [1 ]
Pantic, Maja [2 ,3 ]
Pun, Thierry [1 ]
Affiliations
[1] Univ Geneva, Dept Comp Sci, Comp Vis & Multimedia Lab, CH-1227 Carouge, GE, Switzerland
[2] Univ London Imperial Coll Sci Technol & Med, Dept Comp, London SW7 2AZ, England
[3] Univ Twente, Fac Elect Engn Math & Comp Sci, NL-7522 NB Enschede, Netherlands
Funding
European Research Council; Swiss National Science Foundation;
Keywords
Emotion recognition; EEG; pupillary reflex; pattern classification; affective computing; PUPIL LIGHT REFLEX; CLASSIFICATION; OSCILLATIONS; SYSTEMS;
DOI
10.1109/T-AFFC.2011.37
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
This paper presents a user-independent emotion recognition method with the goal of recovering affective tags for videos using electroencephalogram (EEG), pupillary response, and gaze distance. We first selected 20 video clips with extrinsic emotional content from movies and online resources. Then, EEG responses and eye gaze data were recorded from 24 participants while they watched the emotional video clips. Ground truth was defined based on the median arousal and valence scores given to the clips in a preliminary study using an online questionnaire. Based on the participants' responses, three classes were defined for each dimension: the arousal classes were calm, medium aroused, and activated, and the valence classes were unpleasant, neutral, and pleasant. One of the three affective labels on either the valence or the arousal dimension was then determined by classifying these bodily responses. A leave-one-participant-out cross-validation was employed to assess classification performance in a user-independent setting. The best classification accuracies, 68.5 percent for the three valence labels and 76.4 percent for the three arousal labels, were obtained using a modality fusion strategy and a support vector machine. The results over a population of 24 participants demonstrate that user-independent emotion recognition can outperform individual self-reports for arousal assessments and does not underperform for valence assessments.
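As a rough illustration of the evaluation protocol summarized above, the sketch below runs a leave-one-participant-out cross-validation of an SVM over feature-level fusion of EEG and eye-gaze features. It is a minimal sketch assuming scikit-learn and synthetic, randomly generated features; the feature names, dimensions, and preprocessing are illustrative assumptions and do not reproduce the authors' actual pipeline.

```python
# Sketch of the evaluation protocol described in the abstract:
# leave-one-participant-out cross-validation of an SVM over fused
# EEG and eye-gaze features. Feature shapes and labels are
# illustrative assumptions, not the authors' exact pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_participants, n_clips = 24, 20

# Hypothetical per-trial features: EEG features and pupillary/gaze
# features, concatenated (feature-level fusion).
eeg_features = rng.normal(size=(n_participants * n_clips, 32))
eye_features = rng.normal(size=(n_participants * n_clips, 6))
X = np.hstack([eeg_features, eye_features])

# Three arousal classes per clip (0 = calm, 1 = medium aroused,
# 2 = activated), shared across participants, standing in for the
# ground truth obtained from the preliminary online study.
clip_labels = rng.integers(0, 3, size=n_clips)
y = np.tile(clip_labels, n_participants)

# Group index = participant id, so each fold leaves one participant out.
groups = np.repeat(np.arange(n_participants), n_clips)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    accuracies.append(accuracy_score(y[test_idx], pred))

print(f"Mean leave-one-participant-out accuracy: {np.mean(accuracies):.3f}")
```

With real EEG and eye-gaze features in place of the random arrays, the per-fold accuracies averaged at the end correspond to the user-independent accuracies reported in the abstract.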
Pages: 211-223
Page count: 13