Learning to Generate Noise for Multi-Attack Robustness

Cited: 0
Authors
Madaan, Divyam [1 ]
Shin, Jinwoo [2 ,3 ]
Hwang, Sung Ju [1 ,3 ,4 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Comp, Daejeon, South Korea
[2] Korea Adv Inst Sci & Technol, Sch Elect Engn, Daejeon, South Korea
[3] Korea Adv Inst Sci & Technol, Grad Sch AI, Daejeon, South Korea
[4] AITRICS, Seoul, South Korea
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139 | 2021 / Vol. 139
Funding
National Research Foundation of Singapore;
DOI
Not available
Abstract
Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods to adversarial perturbations. However, the majority of existing defense methods are tailored to defend against a single category of adversarial perturbation (e.g. the ℓ∞-attack). In safety-critical applications, this makes these methods extraneous, as the attacker can adopt diverse adversaries to deceive the system. Moreover, training on multiple perturbations simultaneously significantly increases the computational overhead during training. To address these challenges, we propose a novel meta-learning framework that explicitly learns to generate noise to improve the model's robustness against multiple types of attacks. Its key component is the Meta Noise Generator (MNG), which outputs optimal noise to stochastically perturb a given sample, such that it helps lower the error on diverse adversarial perturbations. By utilizing samples generated by MNG, we train a model by enforcing label consistency across multiple perturbations. We validate the robustness of models trained by our scheme on various datasets and against a wide variety of perturbations, demonstrating that it significantly outperforms the baselines across multiple perturbations with a marginal computational cost.
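The abstract's core idea of enforcing label consistency across multiple ℓp perturbations can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the projection routine, the function names, and the use of a mean-prediction KL consistency term are assumptions standing in for the paper's actual MNG training objective.

```python
import numpy as np

def project(delta, eps, norm):
    """Project a perturbation onto the eps-ball of the given l_p norm
    (only "inf" and "2" are sketched here)."""
    if norm == "inf":
        return np.clip(delta, -eps, eps)
    if norm == "2":
        n = np.linalg.norm(delta)
        return delta if n <= eps else delta * (eps / n)
    raise ValueError(f"unsupported norm: {norm}")

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def consistency_loss(logits_list):
    """Average KL divergence of each perturbed prediction from the mean
    prediction; zero when all perturbed views agree on the label distribution."""
    probs = [softmax(z) for z in logits_list]
    mean_p = np.mean(probs, axis=0)
    kls = [np.sum(p * (np.log(p) - np.log(mean_p))) for p in probs]
    return float(np.mean(kls))
```

In a full training loop, each clean sample would be perturbed under several norms (plus the MNG's learned noise), the model's logits for every view would be collected, and `consistency_loss` would be added to the classification loss so that predictions agree across perturbation types.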
Pages: 11