A discriminatively deep fusion approach with improved conditional GAN (im-cGAN) for facial expression recognition

Cited by: 43
Authors
Sun, Zhe [1 ]
Zhang, Hehao [1 ]
Bai, Jiatong [1 ]
Liu, Mingyang [1 ]
Hu, Zhengping [1 ]
Affiliations
[1] Yanshan Univ, Dept Informat Sci & Engn, Qinhuangdao 066000, Hebei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Facial expression recognition; Discriminatively deep fusion approach; Improved conditional generative adversarial network; Discriminative loss function; NETWORKS;
DOI
10.1016/j.patcog.2022.109157
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Considering that most deep learning-based methods depend heavily on large quantities of labeled data, extracting discriminative features from training samples with limited labels remains a challenging issue in facial expression recognition. Accordingly, we propose a discriminatively deep fusion (DDF) approach based on an improved conditional generative adversarial network (im-cGAN) to learn abstract representations of facial expressions. First, we train the im-cGAN on facial images annotated with action units (AUs) to generate additional labeled expression samples. Next, we fuse the global features learned by the global-based module with the local features learned by the region-based module to obtain the fused feature representation. Finally, we design a discriminative loss function (D-loss) that expands inter-class variations while minimizing intra-class distances, enhancing the discriminability of the fused features. Experimental results on the JAFFE, CK+, Oulu-CASIA, and KDEF datasets demonstrate that the proposed approach is superior to several state-of-the-art methods. (c) 2022 Elsevier Ltd. All rights reserved.
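The abstract's D-loss is described only at a high level (expand inter-class variation, minimize intra-class distance); the exact formulation is not given in this record. A minimal NumPy sketch of that general idea, in the spirit of center-loss-style objectives, is shown below; the function name `discriminative_loss` and the `margin` parameter are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def discriminative_loss(features, labels, margin=1.0):
    """Illustrative sketch (not the paper's exact D-loss):
    pull each feature toward its class center (intra-class term)
    and push class centers at least `margin` apart (inter-class term)."""
    classes = np.unique(labels)
    centers = np.stack([features[labels == c].mean(axis=0) for c in classes])

    # Intra-class term: mean squared distance of features to their class centers.
    intra = np.mean([
        np.sum((f - centers[np.where(classes == l)[0][0]]) ** 2)
        for f, l in zip(features, labels)
    ])

    # Inter-class term: hinge penalty when two class centers lie closer than `margin`.
    inter, n_pairs = 0.0, 0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            d = np.linalg.norm(centers[i] - centers[j])
            inter += max(0.0, margin - d) ** 2
            n_pairs += 1
    inter /= max(n_pairs, 1)

    return intra + inter
```

With well-separated, tight clusters the loss is near zero; with class centers closer than the margin, the inter-class penalty dominates, which is the behavior the abstract attributes to the D-loss.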
Pages: 11