Generative Transferable Universal Adversarial Perturbation for Combating Deepfakes

Cited by: 0
Authors
Guo, Yuchen [1 ,2 ]
Wang, Xi [1 ]
Fu, Xiaomeng [1 ,2 ]
Li, Jin [1 ,2 ]
Li, Zhaoxing [1 ]
Chai, Yesheng [1 ]
Hao, Jizhong [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024 | 2024
Keywords
adversarial perturbation; deepfake; face modification; face protection
DOI
10.1109/CSCWD61410.2024.10580713
CLC number
TP39 [Applications of Computers]
Discipline codes
081203; 0835
Abstract
Deepfake has recently posed a significant threat to our digital society. This technology allows the modification of facial identity, expression, and attributes in facial images and videos. Misused, Deepfake can invade personal privacy, damage individuals' reputations, and cause serious harm. To counter this threat, researchers have proposed active defense methods that use adversarial perturbation to distort Deepfake outputs, thereby hindering the dissemination of false information. However, existing methods are primarily image-specific, which is inefficient for large-scale data. To address this issue, we propose an end-to-end approach that generates universal perturbations for combating Deepfake. To further cope with diverse Deepfakes, we introduce an adaptive balancing strategy that combats multiple models simultaneously. Specifically, we propose two types of universal perturbation for different scenarios. Disrupting Universal Perturbation (DUP) leads Deepfake models to generate distorted outputs. In contrast, Lapsing Universal Perturbation (LUP) tries to make the output consistent with the original image, allowing correct information to continue propagating. Experiments demonstrate that our perturbations are more effective and generalize better than state-of-the-art methods. Consequently, our method offers a powerful and efficient solution for combating Deepfake, helping to preserve personal privacy and prevent reputational damage.
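The paper's actual networks and losses are not reproduced in this record. Purely as an illustration of the two objectives the abstract names, the toy sketch below optimizes a single perturbation shared across all images inside an L-infinity ball: the DUP variant pushes the model's output on the perturbed image away from its output on the clean image, while the LUP variant pulls the output back toward the original image. A random linear map `W` stands in for a differentiable Deepfake model, and every name, hyperparameter, and the sign-gradient update are assumptions for this sketch, not the authors' method.

```python
import numpy as np

def loss_and_grad(delta, images, W, mode):
    """Loss (lower is better) and gradient for the toy linear model f(x) = W @ x.

    mode="disrupt" (DUP): minimize -||f(x+d) - f(x)||^2, i.e. maximize distortion.
    mode="lapse"   (LUP): minimize  ||f(x+d) - x||^2, i.e. keep output near the original.
    """
    g = np.zeros_like(delta)
    total = 0.0
    for x in images:
        if mode == "disrupt":
            r = W @ (x + delta) - W @ x       # deviation of fake output caused by delta
            total -= r @ r
            g -= 2 * W.T @ r
        else:
            r = W @ (x + delta) - x           # gap between fake output and original image
            total += r @ r
            g += 2 * W.T @ r
    return total / len(images), g / len(images)

def universal_perturbation(images, W, eps=0.05, lr=0.005, steps=200, mode="disrupt", seed=0):
    """Sign-gradient descent on the shared loss, projected to the L-inf ball of radius eps."""
    rng = np.random.default_rng(seed)
    # Small random start: at delta = 0 the DUP gradient vanishes exactly.
    delta = rng.uniform(-eps, eps, images.shape[1]) * 0.1
    best_delta = np.zeros_like(delta)
    best_loss, _ = loss_and_grad(best_delta, images, W, mode)
    for _ in range(steps):
        loss, g = loss_and_grad(delta, images, W, mode)
        if loss < best_loss:                  # keep the best iterate seen so far
            best_loss, best_delta = loss, delta.copy()
        delta = np.clip(delta - lr * np.sign(g), -eps, eps)
    return best_delta

# Toy data: 16 "face images" as flat 8-dim vectors, one stand-in "model" W.
rng = np.random.default_rng(1)
d = 8
W = rng.normal(size=(d, d)) / np.sqrt(d)
images = rng.uniform(0.0, 1.0, size=(16, d))

d_dup = universal_perturbation(images, W, mode="disrupt")
d_lup = universal_perturbation(images, W, mode="lapse")
```

Because the perturbation is optimized once over the whole batch rather than per image, applying it to a new face costs only an addition, which is the efficiency argument the abstract makes against image-specific attacks.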
Pages: 1980-1985
Page count: 6