A Robust GAN-Generated Face Detection Method Based on Dual-Color Spaces and an Improved Xception

Cited by: 74
Authors
Chen, Beijing [1 ,2 ,3 ]
Liu, Xin [1 ,2 ,3 ]
Zheng, Yuhui [1 ,2 ,3 ]
Zhao, Guoying [4 ]
Shi, Yun-Qing [5 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Engn Res Ctr Digital Forens, Minist Educ, Nanjing 210044, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Sch Comp, Nanjing 210044, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Jiangsu Collaborat Innovat Ctr Atmospher Environm, Nanjing 210044, Peoples R China
[4] Univ Oulu, Ctr Machine Vis & Signal Anal, Oulu 90014, Finland
[5] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
Funding
National Natural Science Foundation of China;
Keywords
Faces; Feature extraction; Image color analysis; Convolution; Face detection; Robustness; Convolutional neural networks; Generated face; generative adversarial network; Xception; color space; NETWORKS;
DOI
10.1109/TCSVT.2021.3116679
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology];
Subject classification codes
0808 ; 0809 ;
Abstract
In recent years, generative adversarial networks (GANs) have been widely used to generate realistic fake face images that can easily deceive human beings. Several methods have been proposed to detect these images; however, their detection performance degrades greatly when the testing samples are post-processed. In this paper, experimental studies on detecting post-processed GAN-generated face images find that (a) both the luminance and chrominance components play an important role, and (b) the RGB and YCbCr color spaces achieve better performance than the HSV and Lab color spaces. Therefore, to enhance robustness, both the luminance and chrominance components of dual color spaces (RGB and YCbCr) are considered so that color information is used effectively. In addition, the convolutional block attention module and a multilayer feature aggregation module are introduced into the Xception model to enhance its feature representation power and to aggregate multilayer features, respectively. Finally, a robust dual-stream network is designed by integrating the RGB and YCbCr color spaces and using the improved Xception model. Experimental results demonstrate that our method outperforms existing methods, especially in its robustness against different types of post-processing operations, such as JPEG compression, Gaussian blurring, gamma correction, and median filtering.
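To make the dual-color-space idea concrete, the following is a minimal sketch (not the authors' released code) of a dual-stream classifier: the RGB input is converted to YCbCr with the standard ITU-R BT.601 transform, each view is passed through its own feature extractor, and the two feature vectors are concatenated for real/GAN-generated prediction. The `TinyBackbone` placeholder, layer sizes, and late-fusion choice are illustrative assumptions; the paper instead uses an improved Xception with a convolutional block attention module and multilayer feature aggregation in each stream.

```python
# Hypothetical sketch of a dual-color-space (RGB + YCbCr) GAN-face detector.
# The backbone below is a stand-in for the paper's improved Xception streams.
import torch
import torch.nn as nn


def rgb_to_ycbcr(rgb: torch.Tensor) -> torch.Tensor:
    """Convert a batch of RGB images in [0, 1] to YCbCr (ITU-R BT.601)."""
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return torch.cat([y, cb, cr], dim=1)


class TinyBackbone(nn.Module):
    """Small placeholder feature extractor for one color-space stream."""

    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x).flatten(1)


class DualColorSpaceDetector(nn.Module):
    """Two parallel streams (RGB and YCbCr) fused by feature concatenation."""

    def __init__(self):
        super().__init__()
        self.rgb_stream = TinyBackbone()
        self.ycbcr_stream = TinyBackbone()
        self.classifier = nn.Linear(128 * 2, 2)  # real vs. GAN-generated

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        feat_rgb = self.rgb_stream(rgb)
        feat_ycbcr = self.ycbcr_stream(rgb_to_ycbcr(rgb))
        return self.classifier(torch.cat([feat_rgb, feat_ycbcr], dim=1))


if __name__ == "__main__":
    model = DualColorSpaceDetector()
    logits = model(torch.rand(4, 3, 256, 256))  # 4 RGB face crops in [0, 1]
    print(logits.shape)  # torch.Size([4, 2])
```

In this sketch the YCbCr view is derived on the fly from the RGB tensor, so both streams see the same image content but in different color representations, which mirrors the paper's motivation that luminance and chrominance carry complementary evidence for detecting post-processed GAN-generated faces.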
Pages: 3527-3538
Number of pages: 12