A novel phase congruency based descriptor for dynamic facial expression analysis

Cited by: 7
Authors
Shojaeilangari, Seyedehsamaneh [1 ]
Yau, Wei-Yun [2 ]
Teoh, Eam-Khwang [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
[2] ASTAR, Inst Infocomm Res, Singapore, Singapore
Keywords
Phase congruency; Spatio-temporal descriptor; Emotion recognition; Facial expression; RECOGNITION;
DOI
10.1016/j.patrec.2014.06.009
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The representation and classification of dynamic visual events in videos has been an active field of research. This work proposed a novel spatio-temporal descriptor based on the phase congruency concept and applied it to recognize facial expressions from video sequences. The proposed descriptor comprises histograms of dominant phase congruency over multiple 3D orientations to describe both the spatial and temporal information of a dynamic event. The advantages of our proposed approach are local and dynamic processing, high accuracy, and robustness to image scale variation and illumination changes. We validated the performance of our approach on the Cohn-Kanade (CK+) database, where it achieved 95.44% accuracy in detecting six basic emotions. The approach was also shown to improve classification rates over the baseline results of the AVEC 2011 video subchallenge in detecting four emotion dimensions. We also validated its robustness to illumination and scale variation using our own collected dataset. (C) 2014 Elsevier B.V. All rights reserved.
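The abstract describes a descriptor built from histograms of dominant phase congruency over multiple 3D (spatio-temporal) orientations. As a rough illustration only, and not the authors' method, the sketch below computes a simplified 2D phase congruency map for a single frame using a log-Gabor filter bank and then bins each pixel by the orientation with the strongest phase congruency. The per-frame (rather than 3D) formulation, all filter parameters, and the function names are assumptions introduced for illustration.

# Minimal, illustrative sketch (NOT the paper's exact descriptor): 2D phase
# congruency from a log-Gabor filter bank, followed by a histogram of the
# dominant (maximum-PC) orientation per pixel. Parameters are assumed values.
import numpy as np

def log_gabor_bank(shape, n_scales=4, n_orients=6, min_wavelength=6.0,
                   mult=2.0, sigma_on_f=0.55, angular_sigma=1.2):
    """Build frequency-domain log-Gabor filters (scales x orientations)."""
    rows, cols = shape
    v = np.fft.fftfreq(rows)[:, None]
    u = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    theta = np.arctan2(-v, u)               # frequency-plane angle
    filters = np.zeros((n_scales, n_orients, rows, cols))
    for s in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult**s)          # centre frequency
        radial = np.exp(-(np.log(radius / f0)**2) /
                        (2 * np.log(sigma_on_f)**2))
        radial[0, 0] = 0.0                  # zero response at DC
        for o in range(n_orients):
            angle = o * np.pi / n_orients
            d_theta = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
            angular = np.exp(-(d_theta**2) /
                             (2 * (angular_sigma * np.pi / n_orients)**2))
            filters[s, o] = radial * angular
    return filters

def phase_congruency_histogram(img, n_scales=4, n_orients=6, eps=1e-4):
    """Return an n_orients-bin histogram of the dominant PC orientation."""
    img = img.astype(np.float64)
    F = np.fft.fft2(img)
    filters = log_gabor_bank(img.shape, n_scales, n_orients)
    pc = np.zeros((n_orients,) + img.shape)
    for o in range(n_orients):
        energy = np.zeros(img.shape, dtype=complex)
        amp_sum = np.zeros(img.shape)
        for s in range(n_scales):
            resp = np.fft.ifft2(F * filters[s, o])     # complex filter response
            energy += resp
            amp_sum += np.abs(resp)
        # Simplified Kovesi-style measure: local energy / total amplitude
        pc[o] = np.abs(energy) / (amp_sum + eps)
    dominant = np.argmax(pc, axis=0)                   # per-pixel best orientation
    weights = np.max(pc, axis=0)                       # weight by PC strength
    hist = np.bincount(dominant.ravel(), weights=weights.ravel(),
                       minlength=n_orients)
    return hist / (hist.sum() + eps)

if __name__ == "__main__":
    frame = np.random.rand(64, 64)                     # stand-in for a face crop
    print(phase_congruency_histogram(frame))

In the paper the same idea is applied over 3D orientations of a video volume and the resulting histograms are concatenated into a spatio-temporal descriptor; the sketch above only conveys the per-pixel "dominant phase congruency plus histogram" principle.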
Pages: 55-61
Number of pages: 7
References
31 references in total
[1]  [Anonymous], 2003, 2003 C COMPUTER VISI, DOI 10.1109/CVPRW.2003.10057
[2]  [Anonymous], 2004, IEEE COMP SOC C COMP
[3]  Asteriadis S., Tzouveli P., Karpouzis K., Kollias S. Estimation of behavioral user state based on eye gaze and head pose - application in an e-learning environment. Multimedia Tools and Applications, 2009, 41(3): 469-493.
[4]  Bashyal S., Venayagamoorthy G. K. Recognition of facial expressions using Gabor wavelets and learning vector quantization. Engineering Applications of Artificial Intelligence, 2008, 21(7): 1056-1064.
[5]  Camara-Chavez G., Araujo A. de Albuquerque. Harris-SIFT descriptor for video event detection based on a machine learning approach. 2009 11th IEEE International Symposium on Multimedia (ISM 2009), 2009: 153+.
[6]  Cohen I., Sebe N., Garg A., Chen L. S., Huang T. S. Facial expression recognition from video sequences: temporal and static modeling. Computer Vision and Image Understanding, 2003, 91(1-2): 160-187.
[7]  Cohen I., 2000, NEURAL INF PROCESS S
[8]  Cohn J. F., Zlochower A. J., Lien J. J., Kanade T. Feature-point tracking by optical flow discriminates subtle differences in facial expression. Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998: 396-401.
[9]  Dollar P. Proceedings of the 2nd Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance (VS-PETS), 2005: 65.
[10]  Glodek M. Lecture Notes in Computer Science, 2011, 6975: 359. DOI 10.1007/978-3-642-24571-8_47.