A3D: A Platform of Searching for Robust Neural Architectures and Efficient Adversarial Attacks

Cited by: 0
Authors
Sun, Jialiang [1]
Yao, Wen [1]
Jiang, Tingsong [1]
Li, Chao [1,2]
Chen, Xiaoqian [1]
Affiliations
[1] Chinese Academy of Military Science, Defense Innovation Institute, Beijing 100071, People's Republic of China
[2] Xidian University, School of Artificial Intelligence, Xi'an 710071, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
Robustness; Search problems; Automated machine learning; Optimization; Noise; Jacobian matrices; Perturbation methods; Mathematical models; Computer architecture; Hands; Auto machine learning; adversarial defense; adversarial attack; evolutionary algorithm; CLASSIFICATION;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Due to the urgent need for robustness in deep neural networks (DNNs), numerous open-source tools and platforms have been developed to evaluate the robustness of DNN models by ensembling most of the existing adversarial attack or defense algorithms. Unfortunately, current platforms can optimize neither the DNN architectures nor the configurations of adversarial attacks to further enhance model robustness or attack performance. To alleviate these problems, in this paper we propose a novel platform called auto adversarial attack and defense (A3D), which helps search for robust neural network architectures and efficient adversarial attacks. A3D integrates multiple neural architecture search methods to find robust architectures under different robustness evaluation metrics. It also provides multiple optimization algorithms to search for efficient adversarial attacks. Furthermore, we combine auto adversarial attack and defense into a unified framework: in auto adversarial defense, the searched efficient attack serves as a new robustness evaluation to further enhance robustness, while in auto adversarial attack, the searched robust architectures serve as the threat model to help find stronger adversarial attacks. Experiments on the CIFAR10, CIFAR100, and ImageNet datasets demonstrate the feasibility and effectiveness of the proposed platform.
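The unified framework described in the abstract alternates between the two searches: the searched attack becomes the evaluation metric for the defense search, and the searched architecture becomes the threat model for the attack search. Below is a minimal, purely illustrative Python sketch of that loop. It is not the A3D implementation or API; the candidate pools, the robust_accuracy stand-in, and all helper names are hypothetical placeholders, and the real NAS and evolutionary attack search are replaced by a naive argmax/argmin over tiny toy lists.

```python
# Hypothetical sketch of the unified auto-attack / auto-defense loop.
# None of these identifiers come from A3D; they only illustrate the idea.

ARCH_POOL = ["arch_a", "arch_b", "arch_c"]                            # toy architecture search space
ATTACK_POOL = [{"eps": 2 / 255}, {"eps": 4 / 255}, {"eps": 8 / 255}]  # toy attack configurations


def robust_accuracy(arch, attack):
    """Toy stand-in for the robustness metric: accuracy of `arch` under `attack`.
    A real platform would train the model and run the attack on a dataset."""
    return (hash((arch, attack["eps"])) % 100) / 100.0


def search_robust_architecture(attack):
    """Auto adversarial defense step: the searched attack is the evaluation metric."""
    return max(ARCH_POOL, key=lambda a: robust_accuracy(a, attack))


def search_attack_config(arch):
    """Auto adversarial attack step: the searched architecture is the threat model."""
    return min(ATTACK_POOL, key=lambda c: robust_accuracy(arch, c))


def unified_search(n_rounds=3):
    attack = ATTACK_POOL[0]   # start from a default attack configuration
    arch = None
    for _ in range(n_rounds):
        arch = search_robust_architecture(attack)   # defense search under current attack
        attack = search_attack_config(arch)         # attack search against current architecture
    return arch, attack


if __name__ == "__main__":
    print(unified_search())
```

In an actual system, the inner searches would be NAS and evolutionary attack-configuration optimizers rather than exhaustive enumeration, but the alternating structure is the same.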
Pages: 3975-3991
Number of pages: 17