On the Adversarial Robustness of Robust Estimators

Cited by: 4
Authors
Lai, Lifeng [1]
Bayraktar, Erhan [2]
Affiliations
[1] Univ Calif Davis, Dept Elect & Comp Engn, Davis, CA 95616 USA
[2] Univ Michigan, Dept Math, Ann Arbor, MI 48104 USA
Funding
US National Science Foundation
Keywords
Robustness; Estimation; Optimization; Principal component analysis; Data analysis; Neural networks; Sociology; Robust estimators; adversarial robustness; M-estimator; non-convex optimization;
DOI
10.1109/TIT.2020.2985966
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Motivated by recent data analytics applications, we study the adversarial robustness of robust estimators. Instead of assuming that only a fraction of the data points are outliers, as in the classic robust estimation setup, we consider an adversarial setup in which an attacker observes the whole dataset and can modify all data samples so as to maximize the estimation error caused by the attack. We characterize the attacker's optimal attack strategy and introduce the adversarial influence function (AIF) to quantify an estimator's sensitivity to such attacks. We provide an approach to characterize the AIF of any given robust estimator, and then design an optimal estimator that minimizes AIF and is hence least sensitive, and most robust, to adversarial attacks. From this characterization, we identify a tradeoff between AIF (i.e., robustness against adversarial attacks) and the influence function, a quantity used in classic robust statistics to measure robustness against outliers, and design estimators that strike a desirable tradeoff between these two quantities.
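To make the sensitivity notion above concrete, the sketch below numerically compares a first-order, empirical stand-in for the adversarial influence function of two location estimators: the sample mean and a Huber-type M-estimator. It is only an illustration of the idea, not the paper's derivation; the Huber loss, the l2 attack-budget model, the first-order sensitivity formula, and all function names are assumptions introduced here. For a location M-estimator solving sum_i psi(x_i - theta) = 0, a small change to sample x_i shifts theta by approximately psi'(x_i - theta) / sum_j psi'(x_j - theta), so an attacker with a small l2 budget over the whole dataset aligns the perturbation with this gradient and induces a shift proportional to its l2 norm.

    import numpy as np

    def huber_psi_prime(r, c=1.345):
        # Derivative of the Huber psi function: 1 on [-c, c], 0 outside.
        return (np.abs(r) <= c).astype(float)

    def mean_psi_prime(r):
        # psi'(r) = 1 everywhere for the sample mean (rho(r) = r^2 / 2).
        return np.ones_like(r)

    def empirical_adv_sensitivity(x, theta, psi_prime):
        # First-order worst-case shift of the estimate per unit l2 attack budget:
        # the attacker perturbs every sample in proportion to d(theta)/d(x_i).
        w = psi_prime(x - theta)
        grad = w / w.sum()
        return np.linalg.norm(grad)

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.standard_normal(95), 8.0 + rng.standard_normal(5)])  # 5 gross outliers

    # The median is used as a cheap proxy for the Huber location estimate.
    print("sample mean :", empirical_adv_sensitivity(x, x.mean(), mean_psi_prime))
    print("Huber-type  :", empirical_adv_sensitivity(x, np.median(x), huber_psi_prime))

In this toy run the Huber-type estimator, which down-weights the outliers (psi' = 0 on clipped samples), shows a slightly larger empirical sensitivity than the mean, consistent with the tradeoff between adversarial robustness and classic outlier robustness described in the abstract.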
Pages: 5097 - 5109
Number of pages: 13