Wild facial expression recognition based on incremental active learning

Cited by: 21
Authors
Ahmed, Minhaz Uddin [1 ]
Woo, Kim Jin [1 ]
Hyeon, Kim Yeong [1 ]
Bashar, Md Rezaul [2 ]
Rhee, Phill Kyu [1 ]
Affiliations
[1] Inha Univ, Comp Engn Dept, 100 Inha Ro, Incheon 22212, South Korea
[2] Sci Technol & Management Crest, Sydney, NSW, Australia
Source
COGNITIVE SYSTEMS RESEARCH | 2018, Vol. 52
Funding
National Research Foundation of Singapore;
Keywords
Expression recognition; Emotion classification; Face detection; Convolutional neural network; Active learning; FACE RECOGNITION;
DOI
10.1016/j.cogsys.2018.06.017
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Facial expression recognition in the wild is a challenging problem in computer vision research due to varying circumstances such as pose dissimilarity, age, lighting conditions, and occlusions. Numerous methods, such as point tracking, piecewise affine transformation, compact Euclidean space embedding, the modified local directional pattern, and dictionary-based component separation, have been applied to this problem. In this paper, we propose a deep learning-based automatic wild facial expression recognition system that implements an incremental active learning framework using the VGG16 model developed by the Visual Geometry Group. To train this framework, we gathered a large amount of unlabeled facial expression data from Intelligent Technology Lab (ITLab) members at Inha University, Republic of Korea. The data were collected under five different capture conditions: good lighting, average lighting, close to the camera, far from the camera, and natural lighting, and cover seven facial expressions: happy, disgusted, sad, angry, surprised, fearful, and neutral. Our face detection framework is adapted from a multi-task cascaded convolutional network (MTCNN) detector, and repeating the entire labeling-and-retraining process yields better performance. Experimental results demonstrate that incremental active learning improves the starting baseline accuracy from 63% to an average of 88% on the ITLab dataset in a wild environment. We also present extensive results on facial expression benchmarks such as the Extended Cohn-Kanade dataset, as well as the ITLab face dataset captured in the wild, and obtain better performance than state-of-the-art approaches. (C) 2018 Published by Elsevier B.V.
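
To make the workflow in the abstract concrete, the following is a minimal Python sketch of one incremental active-learning cycle built around a pretrained VGG16 backbone. It assumes Keras/TensorFlow, integer-encoded labels for the seven expressions, input images already cropped to 224x224 faces by an MTCNN-style detector, and a least-confidence query strategy answered by a human annotator ("oracle"); these names and hyperparameters are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (not the authors' code): one style of incremental active
    # learning with a VGG16 backbone. Inputs are assumed to be 224x224 RGB face
    # crops (e.g., produced by an MTCNN-style detector) with integer labels
    # 0..6 for the seven expressions.
    import numpy as np
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import VGG16

    EXPRESSIONS = ["happy", "disgusted", "sad", "angry", "surprised", "fearful", "neutral"]

    def build_model(num_classes=len(EXPRESSIONS)):
        # ImageNet-pretrained VGG16 backbone with a new softmax head.
        base = VGG16(weights="imagenet", include_top=False,
                     input_shape=(224, 224, 3), pooling="avg")
        base.trainable = False  # fine-tune only the head in early rounds
        head = layers.Dense(num_classes, activation="softmax")(base.output)
        model = models.Model(base.input, head)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    def least_confident(model, unlabeled_x, k=200):
        # Uncertainty sampling: indices of the k faces with the lowest
        # top-class probability under the current model.
        probs = model.predict(unlabeled_x, verbose=0)
        return np.argsort(probs.max(axis=1))[:k]

    def active_learning(model, labeled_x, labeled_y, unlabeled_x, oracle, rounds=5):
        # Each round: retrain on the labeled pool, query the most uncertain
        # unlabeled faces, get labels from the human annotator ("oracle"), and
        # fold the newly labeled faces back into the pool.
        for _ in range(rounds):
            model.fit(labeled_x, labeled_y, epochs=3, batch_size=32, verbose=0)
            if len(unlabeled_x) == 0:
                break
            idx = least_confident(model, unlabeled_x)
            labeled_x = np.concatenate([labeled_x, unlabeled_x[idx]])
            labeled_y = np.concatenate([labeled_y, oracle(unlabeled_x[idx])])
            unlabeled_x = np.delete(unlabeled_x, idx, axis=0)
        return model, labeled_x, labeled_y

In the paper's setting, the ITLab annotators would play the role of the oracle, and repeating the round corresponds to the iterative retraining that the abstract credits with raising accuracy from the 63% baseline.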
Pages: 212-222
Number of pages: 11