Going Deeper in Facial Expression Recognition using Deep Neural Networks

Cited by: 0
Authors
Mollahosseini, Ali [1 ]
Chan, David [2 ]
Mahoor, Mohammad H. [1 ,2 ]
Affiliations
[1] Univ Denver, Dept Elect & Comp Engn, Denver, CO 80208 USA
[2] Univ Denver, Dept Comp Sci, Denver, CO USA
Source
2016 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2016) | 2016
Funding
U.S. National Science Foundation;
Keywords
FACE;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Automated Facial Expression Recognition (FER) has remained a challenging and interesting problem in computer vision. Despite the effort devoted to developing various methods for FER, existing approaches lack generalizability when applied to unseen images or to images captured in the wild (i.e., the results are not significant). Most existing approaches are based on engineered features (e.g., HOG, LBPH, and Gabor), where the classifier's hyper-parameters are tuned to give the best recognition accuracy on a single database or a small collection of similar databases. This paper proposes a deep neural network architecture to address the FER problem across multiple well-known standard face datasets. Specifically, our network consists of two convolutional layers, each followed by max pooling, and then four Inception layers. The network is a single-component architecture that takes registered facial images as input and classifies them into one of the six basic expressions or the neutral expression. We conducted comprehensive experiments on seven publicly available facial expression databases, viz. MultiPIE, MMI, CK+, DISFA, FERA, SFEW, and FER2013. The results of our proposed architecture are comparable to or better than the state-of-the-art methods, and better than traditional convolutional neural networks in both accuracy and training time.
Pages: 10