Recognition of emotions from video using acoustic and facial features

Cited by: 8
Authors
Rao, K. Sreenivasa [1 ]
Koolagudi, Shashidhar G. [2 ]
Affiliations
[1] Indian Inst Technol, Sch Informat Technol, Kharagpur 721302, W Bengal, India
[2] Natl Inst Technol Karnataka, Dept Comp Sci & Engn, Surathkal 575025, Karnataka, India
Keywords
Emotion recognition; Autoassociative neural network (AANN); Spectral and prosodic features; Facial features; Acoustic features; VOICE CONVERSION; SPEECH; EXPRESSION; FACE;
DOI
10.1007/s11760-013-0522-6
CLC classification
TM [Electrical technology]; TN [Electronic technology, communication technology]
Discipline codes
0808; 0809
Abstract
In this paper, acoustic and facial features extracted from video are explored for recognizing emotions. The temporal variation of the gray values of pixels within the eye and mouth regions is used as a feature to capture emotion-specific knowledge from facial expressions. Acoustic features representing spectral and prosodic information are explored for recognizing emotions from the speech signal. Autoassociative neural network (AANN) models are used to capture the emotion-specific information from the acoustic and facial features. The basic objective of this work is to examine the capability of the proposed acoustic and facial features to capture emotion-specific information. Further, the correlations among the feature sets are analyzed by combining their evidence at different levels. The recognition performance of the systems developed using acoustic and facial features is 85.71 and 88.14 %, respectively; combining the evidence of the models developed using the two feature sets improves the performance to 93.62 %. The performance of the emotion recognition systems developed using neural network models is compared with hidden Markov models, Gaussian mixture models and support vector machine models. The proposed features and models are evaluated on a real-life emotional database, the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database, recently collected at the University of Southern California.
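The score-level fusion described in the abstract, where evidence from the acoustic and facial models is combined to lift recognition performance, can be sketched as a convex combination of per-emotion confidence scores. This is a minimal illustrative sketch: the weight, emotion labels, and example scores below are assumptions for demonstration, not values or the exact fusion rule from the paper.

```python
# Illustrative score-level fusion of two modality-specific emotion models.
# Each model is assumed to emit one confidence score per emotion class
# (e.g. derived from AANN reconstruction error); the fused score is a
# weighted combination of the two modality scores.

def fuse_scores(acoustic, facial, w=0.5):
    """Combine per-emotion confidences from the acoustic and facial models.

    acoustic, facial: dicts mapping emotion label -> confidence in [0, 1].
    w: weight given to acoustic evidence (1 - w goes to facial evidence);
       in practice this would be tuned on held-out data.
    """
    assert acoustic.keys() == facial.keys(), "models must share emotion labels"
    return {emo: w * acoustic[emo] + (1 - w) * facial[emo] for emo in acoustic}

def classify(scores):
    """Pick the emotion with the highest fused confidence."""
    return max(scores, key=scores.get)

# Hypothetical per-class scores for one test clip (not from the paper).
acoustic = {"anger": 0.62, "happiness": 0.55, "sadness": 0.21, "neutral": 0.30}
facial   = {"anger": 0.48, "happiness": 0.71, "sadness": 0.25, "neutral": 0.33}

fused = fuse_scores(acoustic, facial, w=0.4)
print(classify(fused))  # facial evidence tips the decision toward "happiness"
```

Here the acoustic model alone would favor "anger", but the weighted combination with the facial evidence selects "happiness", which is the kind of complementary behavior the reported jump from 85.71/88.14 % to 93.62 % suggests.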
Pages: 1029-1045
Page count: 17