Wavelet regularization benefits adversarial training

Cited by: 4
Authors
Yan, Jun [1 ]
Yin, Huilin [1 ]
Zhao, Ziming [1 ]
Ge, Wancheng [1 ]
Zhang, Hao [1 ]
Rigoll, Gerhard [2 ]
Affiliations
[1] Tongji Univ, Coll Elect & Informat Engn, 4800 Caoan Gonglu Rd, Shanghai 201804, Peoples R China
[2] Tech Univ Munich, Inst Human Machine Commun, 21 Arcisstr, D-80333 Munich, Germany
Funding
National Natural Science Foundation of China
Keywords
Deep learning; Robustness; Adversarial training; Wavelet transform; Lipschitz constraint; ROBUSTNESS;
DOI
10.1016/j.ins.2023.119650
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Adversarial training methods are frequently used empirical defenses against adversarial examples. While many regularization techniques demonstrate effectiveness when combined with adversarial training, these methods typically operate in the time domain. However, since adversarial vulnerability can be regarded as a high-frequency phenomenon, it is crucial to regularize adversarially trained neural network models in the frequency domain so that they capture both low-frequency and high-frequency features. Neural networks must also fully utilize the detailed local features extracted by their receptive fields. To address these challenges, we conduct a theoretical analysis of the regularization properties of wavelets, which can enhance adversarial training. We propose a wavelet regularization method based on the Haar wavelet decomposition, named Wavelet Average Pooling. This wavelet regularization module is integrated into a wide residual neural network to form a new model called WideWaveletResNet. On the CIFAR-10 and CIFAR-100 datasets, our proposed Adversarial Wavelet Training method demonstrates considerable robustness against different types of attacks. This confirms our assumption that the wavelet regularization method enhances adversarial robustness, particularly in deep and wide neural networks. We present a detailed comparison of different wavelet basis functions and conduct visualization experiments on the Frequency Principle (F-Principle) and interpretability to demonstrate the effectiveness of our method. The code is available at: https://github.com/momo1986/AdversarialWaveletTraining.
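The abstract describes a Wavelet Average Pooling module built on the Haar wavelet decomposition. As a rough illustration of the underlying operation (not the authors' actual implementation, which is available in the linked repository), a one-level orthonormal 2D Haar decomposition of a feature map can be sketched in NumPy; the function name `haar_wavelet_pool` is an assumption for this sketch:

```python
import numpy as np

def haar_wavelet_pool(x):
    """One level of the orthonormal 2D Haar decomposition.

    Splits a 2D array (H and W must be even) into four half-resolution
    subbands: the low-frequency approximation (LL) and three
    high-frequency detail subbands (LH, HL, HH).
    """
    # Gather the four corners of each non-overlapping 2x2 block.
    a = x[0::2, 0::2]  # top-left
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # approximation (low-pass in both axes)
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh
```

Note that the LL subband is exactly twice the output of 2x2 average pooling, which may be why the module is named Wavelet Average Pooling, while the three detail subbands isolate the high-frequency content that the abstract associates with adversarial vulnerability. Because the transform is orthonormal, the total energy of the four subbands equals that of the input.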
Pages: 20
Related papers
50 records total
[21] Kuang, Huafeng; Liu, Hong; Lin, Xianming; Ji, Rongrong. Defense Against Adversarial Attacks Using Topology Aligning Adversarial Training. IEEE Transactions on Information Forensics and Security, 2024, 19: 3659-3673.
[22] Jia, Xiaojun; Chen, Yuefeng; Mao, Xiaofeng; Duan, Ranjie; Gu, Jindong; Zhang, Rong; Xue, Hui; Liu, Yang; Cao, Xiaochun. Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging. IEEE Transactions on Information Forensics and Security, 2024, 19: 8125-8139.
[23] Zhou, Xiaoling; Wu, Ou; Yang, Nan. Adversarial Training With Anti-Adversaries. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(12): 10210-10227.
[24] Dai, Quanyu; Shen, Xiao; Zhang, Liang; Li, Qiang; Wang, Dan. Adversarial Training Methods for Network Embedding. Proceedings of the World Wide Web Conference (WWW 2019), 2019: 329-339.
[25] Hwang, Joong-won; Lee, Youngwan; Oh, Sungchan; Bae, Yuseok. Adversarial Training with Stochastic Weight Average. 2021 IEEE International Conference on Image Processing (ICIP), 2021: 814-818.
[26] Carletti, Mattia; Sinigaglia, Erto; Terzi, Matteo; Susto, Gian Antonio. On the Limitations of Adversarial Training for Robust Image Classification with Convolutional Neural Networks. Information Sciences, 2024, 675.
[27] Moradi, Milad; Samwald, Matthias. Improving the Robustness and Accuracy of Biomedical Language Models through Adversarial Training. Journal of Biomedical Informatics, 2022, 132.
[28] Li, Ziqiang; Xia, Pengfei; Tao, Rentuo; Niu, Hongjing; Li, Bin. A New Perspective on Stabilizing GANs Training: Direct Adversarial Training. IEEE Transactions on Emerging Topics in Computational Intelligence, 2023, 7(1): 178-189.
[29] Wang, Desheng; Jin, Weidong; Wu, Yunpu. Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification. Sensors, 2023, 23(6).
[30] Song, Chang; Cheng, Hsin-Pai; Yang, Huanrui; Li, Sicheng; Wu, Chunpeng; Wu, Qing; Chen, Yiran; Li, Hai. MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks. 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2018: 476-481.