Fusion of facial expressions and EEG for implicit affective tagging

Cited by: 152
Authors
Koelstra, Sander [1 ]
Patras, Ioannis [1 ]
Affiliations
[1] QMUL, Sch Comp Sci & Elect Engn, London E1 4NS, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Emotion classification; EEG; Facial expressions; Signal processing; Pattern classification; Affective computing; RECOGNITION; EMOTION;
DOI
10.1016/j.imavis.2012.10.002
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
The explosion of user-generated, untagged multimedia data in recent years generates a strong need for efficient search and retrieval of this data. The predominant method for content-based tagging is slow, labor-intensive manual annotation. Consequently, automatic tagging is currently a subject of intensive research. However, it is clear that the process will not be fully automated in the foreseeable future. We propose to involve the user and investigate methods for implicit tagging, wherein users' responses to the multimedia content are analyzed in order to generate descriptive tags. Here, we present a multi-modal approach that analyses both facial expressions and electroencephalography (EEG) signals for the generation of affective tags. We perform classification and regression in the valence-arousal space and present results for both feature-level and decision-level fusion. We demonstrate improvement in the results when using both modalities, suggesting that the modalities contain complementary information. (C) 2012 Elsevier B.V. All rights reserved.
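The abstract distinguishes feature-level from decision-level fusion of the two modalities. The sketch below is purely illustrative (it is not the authors' implementation): it uses synthetic feature vectors and stand-in classifier probabilities to show the shape of each strategy, with the modality weight chosen arbitrarily.

```python
import numpy as np

# Hypothetical per-trial features for the two modalities (synthetic data).
rng = np.random.default_rng(0)
face_features = rng.normal(size=8)   # e.g. facial-expression descriptors
eeg_features = rng.normal(size=16)   # e.g. EEG band-power features

# Feature-level fusion: concatenate modality features into a single
# vector, which one classifier or regressor then consumes.
fused_features = np.concatenate([face_features, eeg_features])

# Decision-level fusion: train a separate classifier per modality and
# combine their class-probability outputs, here by a weighted sum.
# The probabilities below are stand-ins for real model outputs.
p_face = np.array([0.6, 0.4])  # [P(high valence), P(low valence)] from face model
p_eeg = np.array([0.3, 0.7])   # same, from EEG model
w = 0.5                        # modality weight (could be tuned or learned)
p_fused = w * p_face + (1 - w) * p_eeg
predicted_class = int(np.argmax(p_fused))
```

Feature-level fusion lets a single model exploit cross-modal correlations, while decision-level fusion keeps the modalities independent until the final vote, which is often more robust when one modality is noisy or missing.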
Pages: 164-174
Page count: 11