Evading Deepfake-Image Detectors with White- and Black-Box Attacks

Cited by: 99
Authors
Carlini, Nicholas [1 ]
Farid, Hany [2 ]
Affiliations
[1] Google Brain, Mountain View, CA 94043 USA
[2] Univ Calif Berkeley, Berkeley, CA 94720 USA
Source
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020) | 2020
DOI
10.1109/CVPRW50498.2020.00337
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
It is now possible to synthesize highly realistic images of people who do not exist. Such content has, for example, been implicated in the creation of fraudulent social-media profiles responsible for disinformation campaigns. Significant efforts are, therefore, being deployed to detect synthetically-generated content. One popular forensic approach trains a neural network to distinguish real from synthetic content. We show that such forensic classifiers are vulnerable to a range of attacks that reduce the classifier to near-0% accuracy. We develop five attack case studies on a state-of-the-art classifier that achieves an area under the ROC curve (AUC) of 0.95 on almost all existing image generators, when only trained on one generator. With full access to the classifier, we can flip the lowest bit of each pixel in an image to reduce the classifier's AUC to 0.0005; perturb 1% of the image area to reduce the classifier's AUC to 0.08; or add a single noise pattern in the synthesizer's latent space to reduce the classifier's AUC to 0.17. We also develop a black-box attack that, with no access to the target classifier, reduces the AUC to 0.22. These attacks reveal significant vulnerabilities of certain image-forensic classifiers.
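The lowest-bit attack described in the abstract can be pictured as an adversarial perturbation whose per-pixel magnitude never exceeds one 8-bit quantization level (1/255). The sketch below is illustrative only and is not the authors' implementation: it assumes a hypothetical PyTorch forensic classifier `detector` that outputs a single logit (higher means "synthetic") and an image tensor `x` scaled to [0, 1], and it uses a single FGSM-style step where the paper's white-box attack is more elaborate.

# Minimal sketch of a lowest-bit-flip white-box attack (illustrative assumptions:
# `detector` is a PyTorch model returning one logit where higher means "synthetic";
# `x` is an 8-bit image scaled to [0, 1]).
import torch

def lsb_flip_attack(detector: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Return a copy of `x` in which each pixel is perturbed by at most 1/255."""
    x_adv = x.clone().detach().requires_grad_(True)
    logit = detector(x_adv)                    # "synthetic" score for the input
    logit.sum().backward()                     # gradient of that score w.r.t. pixels
    step = (1.0 / 255.0) * x_adv.grad.sign()   # one 8-bit quantization level per pixel
    x_adv = (x_adv - step).clamp(0.0, 1.0)     # move toward the "real" decision
    x_adv = torch.round(x_adv * 255.0) / 255.0 # re-quantize so the change stays a low-bit flip
    return x_adv.detach()

A single step of this size already bounds the perturbation at the lowest bit of each pixel; the paper reports that such minimally perturbed images are enough to drive the classifier's AUC to 0.0005.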
Pages
2804-2813
Page count
10