Adversarial Attacks Against Deep Generative Models on Data: A Survey

Times Cited: 23
Authors
Sun, Hui [1 ]
Zhu, Tianqing [1 ]
Zhang, Zhiqiu [1 ]
Jin, Dawei [2 ]
Xiong, Ping
Zhou, Wanlei [3 ]
Affiliations
[1] China Univ Geosci, Wuhan 430074, Hubei, Peoples R China
[2] Zhongnan Univ Econ & Law, Wuhan 430073, Hubei, Peoples R China
[3] City Univ Macau, Macau, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Generators; Data models; Codes; Biological system modeling; Security; Privacy; Deep generative models; deep learning; membership inference attack; evasion attack; model defense; NETWORKS;
DOI
10.1109/TKDE.2021.3130903
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep generative models have gained much attention for their ability to generate data for applications ranging from healthcare to financial technology to surveillance, and many more, with generative adversarial networks (GANs) and variational auto-encoders (VAEs) being the most popular models. Yet, as with all machine learning models, concerns over security breaches and privacy leaks persist, and deep generative models are no exception. In fact, these models have advanced so rapidly in recent years that work on their security is still in its infancy. In an attempt to audit the current and future threats against these models, and to provide a roadmap for defense preparations in the short term, we prepared this comprehensive and specialized survey on the security and privacy preservation of GANs and VAEs. Our focus is on the inner connection between attacks and model architectures and, more specifically, on five components of deep generative models: the training data, the latent code, the generators/decoders of GANs/VAEs, the discriminators/encoders of GANs/VAEs, and the generated data. For each model, component, and attack, we review the current research progress and identify the key challenges. The paper concludes with a discussion of possible future attacks and research directions in the field.
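To make the five components named in the abstract concrete, below is a minimal sketch of a GAN in PyTorch that labels the training data, latent code, generator, discriminator, and generated data targeted by the surveyed attacks. The layer sizes, placeholder data batch, and variable names are our own illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # illustrative sizes (assumption), e.g., flattened 28x28 images

# Component 3: generator (the decoder plays the analogous role in a VAE)
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Component 4: discriminator (the encoder plays the analogous role in a VAE)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

training_data = torch.randn(32, data_dim)   # Component 1: training data (random placeholder batch)
latent_code = torch.randn(32, latent_dim)   # Component 2: latent code z
generated_data = generator(latent_code)     # Component 5: generated data G(z)

# Standard GAN discriminator loss over real and generated samples; the attacks
# surveyed in the paper (e.g., membership inference, evasion) target one or
# more of the five labeled components above.
bce = nn.BCELoss()
real_scores = discriminator(training_data)
fake_scores = discriminator(generated_data)
d_loss = bce(real_scores, torch.ones_like(real_scores)) + bce(fake_scores, torch.zeros_like(fake_scores))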
Pages: 3367-3388
Page count: 22