Cross-Domain Facial Expression Recognition by Combining Transfer Learning and Face-Cycle Generative Adversarial Network

Cited: 0
Authors
Zhou, Yu [1 ]
Yang, Ben [2 ]
Liu, Zhenni [1 ]
Wang, Qian [1 ]
Xiong, Ping [1 ]
Affiliations
[1] Zhongnan Univ Econ & Law, Sch Informat Engn, Wuhan 430073, Peoples R China
[2] Xi An Jiao Tong Univ, Inst Artificial Intelligence & Robot, Xian 710049, Peoples R China
Keywords
Facial expression recognition; Transfer learning; Generative Adversarial Network; PATTERNS; MODEL;
DOI
10.1007/s11042-024-18713-y
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Facial expression recognition (FER) is one of the important research topics in computer vision. It is difficult to obtain high accuracy in FER tasks, especially when high-quality labeled data are insufficient. Indeed, facial images with non-frontal faces, occlusions, and inaccurate labels heavily affect the training of FER network models, leading to low recognition accuracy and poor robustness. To this end, we propose a novel strategy for FER tasks that combines transfer learning and a generative adversarial network (GAN). First, we enlarge the training datasets by introducing an effective face-cycle GAN to synthesize additional facial expression images. Then, we develop two FER neural networks based on two representative convolutional neural networks (CNNs). By transferring cross-domain knowledge from the two well-trained CNNs to the proposed FER CNNs, the method not only obtains more pre-trained knowledge but also greatly accelerates the training process. The experimental results show that the proposed FER CNNs, integrated with the new face-cycle GAN, achieve accuracies of 98.44%, 95.24%, and 91.67% on three widely used datasets, CK+, JAFFE, and Oulu-CASIA, respectively. Compared with the results obtained by other state-of-the-art FER methods, the accuracies are improved by 0.34%, 0.24%, and 2.62%, respectively.
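The transfer-learning step summarized in the abstract (reusing a well-trained backbone and fine-tuning only the new FER-specific layers) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: NumPy stands in for a deep-learning framework, the frozen "pre-trained" backbone is a random projection, and the synthetic batch stands in for (augmented) CK+/JAFFE images. All names, shapes, and hyperparameters are assumptions.

```python
# Hedged sketch: freeze a "pre-trained" feature extractor, train only a new
# classification head on labeled expression data (transfer-learning pattern).
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 7    # basic expression categories (illustrative)
FEAT_DIM = 64    # output width of the frozen backbone (illustrative)
IMG_DIM = 256    # flattened input "image" size (illustrative)

# Frozen backbone: a random projection standing in for pre-trained CNN layers.
W_backbone = rng.normal(size=(IMG_DIM, FEAT_DIM)) / np.sqrt(IMG_DIM)

def extract_features(x):
    """Frozen pre-trained layers: never updated during fine-tuning."""
    return np.maximum(x @ W_backbone, 0.0)  # ReLU

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic labeled batch standing in for real facial-expression images.
X = rng.normal(size=(64, IMG_DIM))
y = rng.integers(0, N_CLASSES, size=64)
F = extract_features(X)  # computed once, since the backbone is frozen

# New head is the only trainable part (this is the "transfer" savings).
W_head = np.zeros((FEAT_DIM, N_CLASSES))
lr = 0.1
for _ in range(1000):
    P = softmax(F @ W_head)
    P[np.arange(len(y)), y] -= 1.0          # dLoss/dLogits for cross-entropy
    W_head -= lr * (F.T @ P) / len(y)       # update head weights only

acc = float((softmax(F @ W_head).argmax(axis=1) == y).mean())
print(f"training accuracy of the new head: {acc:.2f}")
```

Because the backbone is frozen, features are extracted once and only the small head is optimized, which is why transfer learning "accelerates the training process greatly" relative to training the full network from scratch.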
Pages: 90289-90314
Page count: 26
Related Papers
50 records in total
  • [31] Multi-Pose Face Recognition with Two-Cycle Generative Adversarial Network
    Xu, Zhijing
    Wang, Dong
    ACTA OPTICA SINICA, 2020, 40 (19)
  • [32] MDTGAN: Multi domain generative adversarial transfer learning network for traffic data imputation
    Fang, Jie
    He, Hangyu
    Xu, Mengyun
    Chen, Hongting
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 255
  • [33] A Novel Multiview Predictive Local Adversarial Network for Partial Transfer Learning in Cross-Domain Fault Diagnostics
    Tan, Shuai
    Wang, Kailiang
    Shi, Hongbo
    Song, Bing
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [34] Cross-domain Facial Expression Recognition Using Supervised Kernel Mean Matching
    Miao, Yun-Qian
    Araujo, Rodrigo
    Kamel, Mohamed S.
    2012 11TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2012), VOL 2, 2012, : 326 - 332
  • [35] Age Factor Removal Network Based on Transfer Learning and Adversarial Learning for Cross-Age Face Recognition
    Du, Lingshuang
    Hu, Haifeng
    Wu, Yongbo
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (09) : 2830 - 2842
  • [36] Transfer subspace learning for cross-dataset facial expression recognition
    Yan, Haibin
    NEUROCOMPUTING, 2016, 208 : 165 - 173
  • [37] Facial Expression Recognition Using Transfer Learning on Deep Convolutional Network
    Hablani, Ramchand
    BIOSCIENCE BIOTECHNOLOGY RESEARCH COMMUNICATIONS, 2020, 13 (14): : 185 - 188
  • [38] SAR Target Recognition Based on Cross-Domain and Cross-Task Transfer Learning
    Wang, Ke
    Zhang, Gong
    Leung, Henry
    IEEE ACCESS, 2019, 7 : 153391 - 153399
  • [39] Emotion-Preserving Representation Learning via Generative Adversarial Network for Multi-view Facial Expression Recognition
    Lai, Ying-Hsiu
    Lai, Shang-Hong
    PROCEEDINGS 2018 13TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE & GESTURE RECOGNITION (FG 2018), 2018, : 263 - 270
  • [40] Large-pose facial makeup transfer based on generative adversarial network combined face alignment and face parsing
    Li, Qiming
    Tu, Tongyue
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2023, 20 (01) : 737 - 757