Adversarial Training of Deep Neural Networks Guided by Texture and Structural Information

Cited by: 2
Authors
Wang, Zhaoxin [1]
Wang, Handing [1]
Tian, Cong [1]
Jin, Yaochu [2]
Affiliations
[1] Xidian Univ, Xian, Shaanxi, Peoples R China
[2] Bielefeld Univ, Bielefeld, Germany
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Funding
National Natural Science Foundation of China
Keywords
Deep neural networks; adversarial training; structure and texture information
DOI
10.1145/3581783.3612163
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Adversarial training (AT) is one of the most effective ways for deep neural network models to resist adversarial examples. However, there is still a significant gap between robust training accuracy and robust test accuracy. Although recent studies have shown that data augmentation can effectively reduce this gap, most methods rely heavily on generating large amounts of training data without considering which features benefit model robustness, making them inefficient. To address this issue, we propose a two-stage AT algorithm for image data that adopts different data augmentation strategies during training to improve model robustness. The first stage focuses on the convergence of the algorithm and uses structure and texture information to guide AT. The second stage introduces a strategy that randomly fuses data features to generate diverse adversarial examples for AT. We compare the proposed algorithm with five state-of-the-art algorithms on three models; the experimental results show that it achieves the best robust accuracy under all evaluation metrics on the CIFAR-10 dataset, demonstrating the superiority of our method.
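As a rough illustration of the two-stage procedure described in the abstract, the sketch below shows a PGD-based adversarial training loop that switches augmentation strategies between stages. The names pgd_attack, texture_structure_augment, random_feature_fusion, and switch_epoch are hypothetical and introduced here only for illustration; they are not taken from the paper, and the paper's actual augmentations and inner attack may differ.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard PGD inner attack (an assumed choice, not necessarily the paper's).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv

def two_stage_adversarial_training(model, loader, optimizer, epochs, switch_epoch,
                                   texture_structure_augment, random_feature_fusion):
    # texture_structure_augment and random_feature_fusion are hypothetical callables
    # standing in for the paper's stage-specific data augmentations.
    model.train()
    for epoch in range(epochs):
        for x, y in loader:
            if epoch < switch_epoch:
                # Stage 1: texture/structure-guided augmentation to aid convergence.
                x_aug = texture_structure_augment(x)
            else:
                # Stage 2: random feature fusion to diversify adversarial examples.
                x_aug = random_feature_fusion(x)
            x_adv = pgd_attack(model, x_aug, y)
            optimizer.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            optimizer.step()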
Pages: 4958-4967
Page count: 10