Data-dependent stability analysis of adversarial training

Cited: 0
Authors
Wang, Yihan [1 ]
Liu, Shuang [1 ]
Gao, Xiao-Shan [1 ]
Affiliations
[1] Univ Chinese Acad Sci, Beijing 101408, Peoples R China
Keywords
On-average stability analysis; Generalization bound; Adversarial training; Stochastic gradient descent; Data poisoning attack
DOI
10.1016/j.neunet.2024.106983
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Stability analysis is an essential aspect of studying the generalization ability of deep learning, as it involves deriving generalization bounds for stochastic gradient descent-based training algorithms. Adversarial training is the most widely used defense against adversarial attacks. However, previous generalization bounds for adversarial training have not included information regarding data distribution. In this paper, we fill this gap by providing generalization bounds for stochastic gradient descent-based adversarial training that incorporate data distribution information. We utilize the concepts of on-average stability and high-order approximate Lipschitz conditions to examine how changes in data distribution and adversarial budget can affect robust generalization gaps. Our derived generalization bounds for both convex and non-convex losses are at least as good as the uniform stability-based counterparts, which do not include data distribution information. Furthermore, our findings demonstrate how distribution shifts from data poisoning attacks can impact robust generalization.
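The abstract's central object, stochastic gradient descent-based adversarial training, can be illustrated with a minimal self-contained sketch. Everything below (data, model, and hyperparameters) is a hypothetical toy illustration, not the paper's setup: a logistic-regression classifier trained by SGD, where each step first solves the inner maximization over an l_inf ball of radius eps, the "adversarial budget" that the generalization bounds depend on.

```python
import numpy as np

# Toy sketch of SGD-based adversarial training (hypothetical setup, not the
# paper's experiments): logistic regression with worst-case l_inf perturbations
# of radius eps (the adversarial budget).
rng = np.random.default_rng(0)
n, d, eps, lr = 200, 5, 0.1, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)            # labels in {-1, +1}

def loss(w, X, y):
    # Numerically stable logistic loss: mean of log(1 + exp(-margin)).
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

w = np.zeros(d)
for _ in range(50):                # epochs
    for i in rng.permutation(n):   # one SGD pass in random order
        x, yi = X[i], y[i]
        # Inner maximization: for a linear model the worst-case l_inf
        # perturbation has the closed form delta = -eps * yi * sign(w).
        x_adv = x - eps * yi * np.sign(w)
        margin = yi * (x_adv @ w)
        p = np.exp(-np.logaddexp(0.0, margin))   # stable sigmoid(-margin)
        w -= lr * (-yi * x_adv * p)              # SGD step on adversarial loss

# Robust (adversarial) loss after training; the clean loss can only be lower,
# since the perturbation shrinks every margin by eps * ||w||_1.
robust_loss = loss(w, X - eps * (y[:, None] * np.sign(w)), y)
```

For a linear model the inner maximization has a closed form, which is why no PGD loop appears here; for deep networks the inner step would instead be approximated by iterated projected gradient ascent.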
Pages: 14