Asymptotic Behavior of Adversarial Training in Binary Linear Classification

Cited by: 1
Authors
Taheri, Hossein [1 ]
Pedarsani, Ramtin [1 ]
Thrampoulidis, Christos [1 ,2 ]
Affiliations
[1] Univ Calif Santa Barbara, Dept Elect & Comp Engn, Santa Barbara, CA 93106 USA
[2] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC V6T 1Z4, Canada
Funding
U.S. National Science Foundation;
Keywords
Adversarial learning; adversarial training; high-dimensional statistics; optimization;
DOI
10.1109/TNNLS.2023.3290592
CLC classification
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial training using empirical risk minimization (ERM) is the state-of-the-art method for defense against adversarial attacks, that is, small additive perturbations applied to test data to induce misclassification. Despite its success in practice, understanding the generalization properties of adversarial training in classification remains largely open. In this article, we take a first step in this direction by precisely characterizing the robustness of adversarial training in binary linear classification. Specifically, we consider the high-dimensional regime where the model dimension grows with the size of the training set at a constant ratio. Our results provide exact asymptotics for both standard and adversarial test errors under general ℓq-norm-bounded perturbations (q ≥ 1) in both discriminative binary models and generative Gaussian-mixture models with correlated features. We use our sharp error formulae to explain how the adversarial and standard errors depend on the over-parameterization ratio, the data model, and the attack budget. Finally, by comparing with the robust Bayes estimator, our sharp asymptotics allow us to study the fundamental limits of adversarial training.
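To make the abstract's setting concrete: for a linear classifier, the inner maximization of adversarial ERM has a closed form, because the worst-case ℓq-bounded perturbation shifts the margin by ε times the dual norm of the weight vector. The sketch below (not the paper's code; a minimal illustration assuming logistic loss and ℓ∞ attacks, whose dual norm is ℓ1) trains a robust linear classifier by gradient descent on this closed-form robust loss.

```python
import numpy as np

def robust_logistic_loss(theta, X, y, eps):
    # For a linear classifier and l_inf-bounded perturbations ||delta||_inf <= eps,
    # the inner maximization is explicit:
    #   min_{||delta||_inf <= eps} y * theta^T (x + delta)
    #     = y * theta^T x - eps * ||theta||_1     (l_1 is the dual norm of l_inf)
    margins = y * (X @ theta) - eps * np.linalg.norm(theta, 1)
    return np.mean(np.log1p(np.exp(-margins)))

def adversarial_train(X, y, eps, lr=0.1, steps=2000):
    # Gradient descent on the closed-form robust logistic loss.
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ theta) - eps * np.linalg.norm(theta, 1)
        sigma = 1.0 / (1.0 + np.exp(margins))  # = -d(loss)/d(margin) per sample
        # d(margin_i)/d(theta) = y_i * x_i - eps * sign(theta)
        grad = -(X.T @ (sigma * y)) / n + eps * np.mean(sigma) * np.sign(theta)
        theta -= lr * grad
    return theta
```

For a general ℓq attack (q ≥ 1), the ε‖θ‖₁ term is replaced by ε‖θ‖p with 1/p + 1/q = 1; the paper's asymptotics characterize the test errors of exactly this family of estimators as the dimension-to-sample ratio stays constant.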
Pages: 2004-2012
Page count: 9