Saliency-based framework for facial expression recognition

Cited: 0
Authors
Rizwan Ahmed Khan
Alexandre Meyer
Hubert Konik
Saida Bouakaz
Affiliations
[1] Université de Lyon, Faculty of Information Technology
[2] CNRS, Laboratoire Hubert Curien, UMR 5516
[3] LIRIS, UMR 5205
[4] Université Lyon 1
[5] Barrett Hodgson University
[6] Université Jean Monnet
Source
Frontiers of Computer Science | 2019, Vol. 13
Keywords
facial expression recognition; classification; salient regions; entropy; brightness; local binary pattern
DOI
Not available
Abstract
This article proposes a novel framework for recognizing the six universal facial expressions. The framework is based on three sets of features extracted from a face image: entropy, brightness, and local binary pattern. First, saliency maps are obtained with the state-of-the-art saliency detection algorithm "frequency-tuned salient region detection". The idea is to use these saliency maps to weight the extracted entropy and brightness features according to their perceptual importance. We performed a visual experiment to validate the saliency detection algorithm against the human visual system: the eye movements of 15 subjects were recorded with an eye-tracker in free-viewing conditions while they watched a collection of 54 videos selected from the Cohn-Kanade facial expression database. The results demonstrate that the computed saliency maps are consistent with human fixation data. Finally, the performance of the proposed framework is demonstrated via satisfactory classification results on the Cohn-Kanade database, the FG-NET FEED database, and the Dartmouth database of children's faces.
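The core of the pipeline described above — a frequency-tuned saliency map used to weight region-level features such as brightness — can be sketched as follows. This is a minimal single-channel illustration, assuming a grayscale image with values in [0, 1]; the original frequency-tuned algorithm computes the distance in the CIE Lab color space, and the helper `saliency_weighted_mean` and its region coordinates are hypothetical stand-ins for the paper's feature-weighting step, not the authors' actual code.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, normalized to sum to 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma=3.0):
    """Separable Gaussian blur with edge padding (no SciPy dependency)."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    padded = np.pad(img, r, mode="edge")
    # Horizontal pass, then vertical pass.
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, tmp)

def frequency_tuned_saliency(gray):
    """Per-pixel saliency: squared distance between the global image mean
    and a Gaussian-blurred version of each pixel (single-channel stand-in
    for the Lab-vector norm used by the original algorithm)."""
    return (gaussian_blur(gray) - gray.mean()) ** 2

def saliency_weighted_mean(gray, saliency, region):
    """Hypothetical helper: brightness of a face region weighted by the
    region's average saliency, mirroring how the framework weights its
    entropy and brightness features."""
    y0, y1, x0, x1 = region
    return saliency[y0:y1, x0:x1].mean() * gray[y0:y1, x0:x1].mean()

# Toy face image: a bright "mouth" patch on a dark background.
img = np.zeros((48, 48))
img[30:38, 16:32] = 1.0
sal = frequency_tuned_saliency(img)          # same shape as the input
mouth_feature = saliency_weighted_mean(img, sal, (30, 38, 16, 32))
```

Pixels that differ strongly from the smoothed global appearance (here, the bright patch) receive high saliency, so features from that region dominate the descriptor — the same intuition the paper validates against human fixation data.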
Pages: 183–198 (15 pages)