Face frontalization based on generative adversarial network

Cited by: 0
Authors
Hu H.-Y. [1 ,2 ]
Gai S.-Y. [1 ,2 ,3 ]
Da F.-P. [1 ,2 ,3 ]
Affiliations
[1] Key Laboratory of Measurement and Control of Complex System of Engineering, Ministry of Education, Southeast University, Nanjing
[2] School of Automation, Southeast University, Nanjing
[3] Shenzhen Institute of Southeast University, Shenzhen
Keywords
Classifier; Face generation; Generative adversarial network (GAN); Mode collapse
DOI
10.3785/j.issn.1008-973X.2021.01.014
Abstract
Drawing on the two-pathway generative adversarial network (TP-GAN), the original deep convolutional generative adversarial network (DCGAN) structure was replaced with the boundary equilibrium generative adversarial network (BEGAN) in order to improve the frontalization of rotated faces. A classifier for discriminating face identity was added to the traditional two-player network to form a three-player structure. Adding the classifier preserves face identity more effectively than adding constraints to the generator loss function. The adopted BEGAN improves training efficiency and addresses the problem that TP-GAN is complex to train and prone to mode collapse. Experimental results on the multiple images across pose, illumination and expression (Multi-PIE) and labeled faces in the wild (LFW) datasets show that the proposed method can generate high-quality frontal face images while retaining identity information. Copyright ©2021 Journal of Zhejiang University (Engineering Science). All rights reserved.
Pages: 116-123, 152
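The abstract describes a three-player arrangement: a generator that frontalizes a profile face, a BEGAN-style autoencoder discriminator, and an identity classifier whose prediction on the generated face keeps identity consistent. The following minimal PyTorch sketch illustrates how such a setup could be wired. The network architectures, 64x64 image size, number of identities, optimizer settings, and the identity-loss weight w_id are illustrative assumptions, not the paper's actual configuration; only the overall three-player BEGAN training scheme follows the abstract.

# Minimal sketch of a three-player BEGAN setup (generator G, autoencoder
# discriminator D, identity classifier C); sizes and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG = 64          # assumed image resolution
N_ID = 100        # assumed number of identities

def conv_net(out_dim):
    # small convolutional encoder shared in spirit by D's encoder and C
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
        nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
        nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
        nn.Flatten(), nn.Linear(128 * (IMG // 8) ** 2, out_dim))

class Generator(nn.Module):
    # profile image in, frontal image out (heavily simplified encoder-decoder)
    def __init__(self):
        super().__init__()
        self.enc = conv_net(256)
        self.dec = nn.Sequential(
            nn.Linear(256, 128 * (IMG // 8) ** 2), nn.ReLU(),
            nn.Unflatten(1, (128, IMG // 8, IMG // 8)),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())
    def forward(self, x):
        return self.dec(self.enc(x))

class Discriminator(nn.Module):
    # BEGAN discriminator: an autoencoder whose "score" is reconstruction error
    def __init__(self):
        super().__init__()
        self.ae = Generator()   # reuse the same encoder-decoder shape
    def forward(self, x):
        return (self.ae(x) - x).abs().mean(dim=(1, 2, 3))

G, D, C = Generator(), Discriminator(), conv_net(N_ID)
opt_g = torch.optim.Adam(G.parameters(), 1e-4)
opt_d = torch.optim.Adam(D.parameters(), 1e-4)
opt_c = torch.optim.Adam(C.parameters(), 1e-4)

k, gamma, lambda_k = 0.0, 0.5, 1e-3   # BEGAN boundary-equilibrium terms
w_id = 0.1                            # assumed identity-loss weight

def train_step(profile, frontal, identity):
    global k
    fake = G(profile)

    # 1) discriminator: BEGAN loss L(real) - k * L(fake)
    d_real, d_fake = D(frontal).mean(), D(fake.detach()).mean()
    opt_d.zero_grad(); (d_real - k * d_fake).backward(); opt_d.step()

    # 2) classifier: learn identities from real frontal faces only
    opt_c.zero_grad()
    F.cross_entropy(C(frontal), identity).backward()
    opt_c.step()

    # 3) generator: fool D and keep the identity predicted by C
    g_adv = D(fake).mean()
    g_id = F.cross_entropy(C(fake), identity)
    opt_g.zero_grad(); (g_adv + w_id * g_id).backward(); opt_g.step()

    # boundary-equilibrium update of k, clamped to [0, 1]
    k = float(min(max(k + lambda_k * (gamma * d_real.item() - d_fake.item()), 0.0), 1.0))

# smoke test with random tensors standing in for a Multi-PIE style batch
profile = torch.rand(4, 3, IMG, IMG) * 2 - 1
frontal = torch.rand(4, 3, IMG, IMG) * 2 - 1
identity = torch.randint(0, N_ID, (4,))
train_step(profile, frontal, identity)

The point of the sketch is the division of labor claimed in the abstract: the adversarial term comes from the BEGAN discriminator, while identity consistency is enforced by a separate classifier rather than by an extra constraint inside the generator loss.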