An adaptive weighted fusion model with two subspaces for facial expression recognition

Cited by: 0
Authors
Zhe Sun
Zheng-ping Hu
Raymond Chiong
Meng Wang
Shuhuan Zhao
Institutions
[1] Yanshan University,School of Information Science and Engineering
[2] Taishan University,School of Physics and Electronic Engineering
[3] The University of Newcastle,School of Electrical Engineering and Computing
[4] Hebei University,School of Information Science and Engineering
Source
Signal, Image and Video Processing | 2018, Vol. 12
Keywords
Facial expression recognition; Adaptive weighted fusion model; Unsupervised subspace; Supervised subspace;
DOI: Not available
Abstract
Automatic facial expression recognition has received considerable attention in the computer vision and pattern recognition research communities. To achieve satisfactory accuracy, deriving a robust facial expression representation is especially important. In this paper, we present an adaptive weighted fusion model (AWFM) that automatically determines the optimal fusion weights. The AWFM integrates two subspaces, an unsupervised subspace and a supervised subspace, to represent and classify query samples. The unsupervised subspace is formed from differentiated expression samples generated via an auxiliary neutral training set. The supervised subspace is obtained by reconstructing each class of the raw training data through intra-class singular value decomposition based on low-rank decomposition. Our experiments on three public facial expression datasets confirm that the proposed model achieves better performance than conventional fusion methods as well as state-of-the-art methods from the literature.
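The abstract describes two class-wise subspace representations fused by an adaptive weight. As a minimal illustrative sketch only (not the authors' released code), the Python snippet below builds per-class dictionaries via truncated SVD, scores a query by its reconstruction residual in each subspace, and fuses the two residual vectors with a confidence-based weight; the dictionary construction and the residual-ratio weighting rule are assumptions for illustration, not the paper's exact formulation.

import numpy as np

def lowrank_class_dictionary(X_class, rank):
    # Reconstruct one class's training matrix (features in columns) from its
    # leading singular vectors, i.e., an intra-class low-rank approximation.
    U, s, Vt = np.linalg.svd(X_class, full_matrices=False)
    k = min(rank, len(s))
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def residual(y, D):
    # Least-squares reconstruction residual of query y against dictionary D.
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return np.linalg.norm(y - D @ coef)

def fused_classification(y, sup_dicts, unsup_dicts):
    # sup_dicts / unsup_dicts: one dictionary (d x n_c matrix) per class.
    r_sup = np.array([residual(y, D) for D in sup_dicts])
    r_uns = np.array([residual(y, D) for D in unsup_dicts])

    def sharpness(r):
        # How clearly one class stands out in this subspace (assumed heuristic).
        a, b = np.sort(r)[:2]
        return 1.0 - a / (b + 1e-12)

    # Weight each subspace by how discriminative its residuals are.
    w = sharpness(r_sup) / (sharpness(r_sup) + sharpness(r_uns) + 1e-12)
    fused = w * r_sup + (1.0 - w) * r_uns
    return int(np.argmin(fused)), w

In this sketch a query is assigned to the class with the smallest fused residual; the supervised dictionaries would come from the low-rank reconstruction of each training class, and the unsupervised ones from difference samples generated with the auxiliary neutral set.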
Pages: 835-843
Number of pages: 8