Crafting imperceptible and transferable adversarial examples: leveraging conditional residual generator and wavelet transforms to deceive deepfake detection

Cited by: 0
Authors
Li, Zhiyuan [1 ]
Jin, Xin [1 ]
Jiang, Qian [1 ]
Wang, Puming [1 ]
Lee, Shin-Jye [2 ]
Yao, Shaowen [1 ]
Zhou, Wei [1 ]
Affiliations
[1] Yunnan Univ, Kunming, Yunnan, Peoples R China
[2] Natl Yang Ming Chiao Tung Univ, Hsinchu, Taiwan
Funding
National Natural Science Foundation of China;
Keywords
Deepfake detection; Adversarial examples; Imperceptible; Transferability; Black-box attacks;
DOI
10.1007/s00371-024-03605-x
Chinese Library Classification (CLC)
TP31 [Computer software];
Discipline Code
081202; 0835;
Abstract
The malicious abuse of deepfakes has raised serious ethical, security, and privacy concerns, eroding public trust in digital media. While existing deepfake detectors can detect fake images, they are vulnerable to adversarial attacks. Although various adversarial attacks have been explored, most are white-box attacks that are difficult to realize in practice, and the adversarial examples they generate are of poor quality and easily noticeable to the human eye. For this detection task, the goal should be to generate adversarial examples that can deceive detectors while maintaining high quality and authenticity. We propose a method to generate imperceptible and transferable adversarial examples aimed at fooling unknown deepfake detectors. The method combines a conditional residual generator with an accessible detector as a surrogate model, using the detector's relative distance loss function to generate highly transferable adversarial examples. Discrete wavelet transform is also introduced to enhance image quality. Extensive experiments demonstrate that the adversarial examples generated by our method not only possess excellent visual quality but also effectively deceive various detectors, exhibiting superior cross-detector transferability in black-box attacks. Our code is available at: https://github.com/SiSuiyuHang/ITA.
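To make the pipeline described in the abstract concrete, the sketch below shows one plausible way to combine a residual generator conditioned on the fake image, a white-box surrogate detector, a margin-based reading of a relative-distance attack loss, and a Haar wavelet term that discourages visible high-frequency changes. It is a minimal illustration, not the authors' released implementation: the toy architectures, the exact loss forms, and the loss weights are all assumptions.

```python
# Minimal sketch (not the authors' code): adversarial deepfake generation with a
# conditional residual generator, a surrogate detector, a margin-style
# "relative distance" attack loss, and a Haar DWT quality term.
# Architectures, loss forms, and weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualGenerator(nn.Module):
    """Toy residual generator conditioned on the input fake image:
    it predicts a small bounded perturbation added back to the input."""
    def __init__(self, channels=3, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        delta = torch.tanh(self.body(x)) * 0.03      # bounded residual
        return torch.clamp(x + delta, 0.0, 1.0)


class SurrogateDetector(nn.Module):
    """Stand-in for the accessible (white-box) deepfake detector.
    Emits one logit per image: larger means 'more likely fake'."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(1)


def haar_highpass(x):
    """One-level Haar DWT high-frequency subbands (LH, HL, HH)."""
    a, b = x[:, :, 0::2, 0::2], x[:, :, 0::2, 1::2]
    c, d = x[:, :, 1::2, 0::2], x[:, :, 1::2, 1::2]
    return (a + b - c - d) / 2, (a - b + c - d) / 2, (a - b - c + d) / 2


gen, det = ResidualGenerator(), SurrogateDetector()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)

fake = torch.rand(4, 3, 64, 64)      # random batch standing in for deepfake images
adv = gen(fake)

# Assumed margin-style reading of a relative distance loss: push the surrogate's
# fake-logit on the adversarial image below its value on the original fake.
attack_loss = F.relu(det(adv) - det(fake).detach() + 1.0).mean()

# Wavelet quality term: keep high-frequency detail close to the original so the
# perturbation stays visually imperceptible.
quality_loss = sum(F.l1_loss(ha, ho)
                   for ha, ho in zip(haar_highpass(adv), haar_highpass(fake)))

loss = attack_loss + 10.0 * quality_loss   # weight is illustrative
opt.zero_grad()
loss.backward()
opt.step()
```

In the actual method the surrogate would be a trained detector and the generator would be optimized over a dataset of fake images, after which the generated examples are evaluated against unseen black-box detectors; the single optimization step on random tensors above only illustrates how the components fit together.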
Pages: 3329-3344
Number of pages: 16