On the Adversarial Robustness of Robust Estimators

Cited by: 4
|
Authors
Lai, Lifeng [1 ]
Bayraktar, Erhan [2 ]
Affiliations
[1] Univ Calif Davis, Dept Elect & Comp Engn, Davis, CA 95616 USA
[2] Univ Michigan, Dept Math, Ann Arbor, MI 48104 USA
Funding
National Science Foundation, USA;
Keywords
Robustness; Estimation; Optimization; Principal component analysis; Data analysis; Neural networks; Sociology; Robust estimators; adversarial robustness; M-estimator; non-convex optimization;
DOI
10.1109/TIT.2020.2985966
CLC number
TP [Automation & Computer Technology];
Discipline code
0812;
Abstract
Motivated by recent data-analytics applications, we study the adversarial robustness of robust estimators. Instead of assuming that only a fraction of the data points are outliers, as in the classic robust estimation setup, we consider an adversarial setup in which an attacker can observe the whole dataset and modify all data samples so as to maximize the resulting estimation error. We characterize the attacker's optimal attack strategy, and introduce the adversarial influence function (AIF) to quantify an estimator's sensitivity to such attacks. We provide an approach to characterize the AIF of any given robust estimator, and then design the estimator that minimizes AIF, i.e., the one least sensitive to, and hence most robust against, adversarial attacks. From this characterization, we identify a tradeoff between AIF (robustness against adversarial attacks) and the influence function, the quantity used in classic robust estimation to measure robustness against outliers, and design estimators that strike a desirable balance between the two.
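To make the abstract's setup concrete, the following is a minimal numerical sketch, not the paper's exact formulation: it probes how a Huber location M-estimator responds when an attacker perturbs every sample with a small total budget, which is the kind of sensitivity the adversarial influence function (AIF) captures. The function names, the Huber tuning constant, and the budget convention (perturbation of Euclidean norm sqrt(n)*eta aligned with the estimator's first-order sensitivities) are assumptions made for this illustration.

```python
import numpy as np

def huber_psi(r, c=1.345):
    # Huber psi: identity for small residuals, clipped at +/- c
    return np.clip(r, -c, c)

def huber_psi_prime(r, c=1.345):
    # Derivative of psi: 1 inside [-c, c], 0 outside
    return (np.abs(r) <= c).astype(float)

def m_estimate(x, c=1.345, iters=100):
    # Newton iterations for the location M-estimator: sum psi(x_i - theta) = 0
    theta = np.median(x)
    for _ in range(iters):
        num = huber_psi(x - theta, c).sum()
        den = huber_psi_prime(x - theta, c).sum()
        theta += num / max(den, 1e-12)
    return theta

def empirical_aif(x, eta=1e-3, c=1.345):
    # Estimator shift per unit attack magnitude, under a first-order optimal
    # attack that perturbs all samples (an AIF-style quantity, assumed form).
    theta = m_estimate(x, c)
    w = huber_psi_prime(x - theta, c)
    g = w / w.sum()                       # d(theta)/d(x_i), implicit fn. theorem
    delta = np.sqrt(len(x)) * eta * g / np.linalg.norm(g)
    theta_attacked = m_estimate(x + delta, c)
    return (theta_attacked - theta) / eta

rng = np.random.default_rng(0)
x = rng.normal(size=200)
x[:10] += 8.0                             # a few gross outliers
print(empirical_aif(x))                   # shift per unit attack budget
```

Note the contrast the paper formalizes: the clipped psi makes the estimator insensitive to the gross outliers (classic influence-function robustness), yet a coordinated small perturbation of all inliers still moves the estimate, and this residual sensitivity is what AIF measures.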
Pages: 5097 - 5109
Page count: 13
Related papers
50 records in total
  • [1] On the Adversarial Robustness of Subspace Learning
    Li, Fuwei
    Lai, Lifeng
    Cui, Shuguang
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2020, 68 : 1470 - 1483
  • [2] On the Adversarial Robustness of Hypothesis Testing
    Jin, Yulu
    Lai, Lifeng
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 : 515 - 530
  • [3] Exploring Robust Features for Improving Adversarial Robustness
    Wang, Hong
    Deng, Yuefan
    Yoo, Shinjae
    Lin, Yuewei
    IEEE TRANSACTIONS ON CYBERNETICS, 2024, 54 (09) : 5141 - 5151
  • [4] Enhancing Adversarial Robustness via Stochastic Robust Framework
    Sun, Zhenjiang
    Li, Yuanbo
    Hu, Cong
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IV, 2024, 14428 : 187 - 198
  • [5] Improving Adversarial Robustness With Adversarial Augmentations
    Chen, Chuanxi
    Ye, Dengpan
    He, Yiheng
    Tang, Long
    Xu, Yue
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (03) : 5105 - 5117
  • [6] An interpretable adversarial robustness evaluation method based on robust paths
    Li, Zituo
    Sun, Jianbin
    Yao, Xuemei
    Cui, Ruijing
    Ge, Bingfeng
    Yang, Kewei
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 1213 - 1218
  • [7] ON THE ADVERSARIAL ROBUSTNESS OF LINEAR REGRESSION
    Li, Fuwei
    Lai, Lifeng
    Cui, Shuguang
    PROCEEDINGS OF THE 2020 IEEE 30TH INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2020,
  • [8] ON THE ADVERSARIAL ROBUSTNESS OF SUBSPACE LEARNING
    Li, Fuwei
    Lai, Lifeng
    Cui, Shuguang
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 2477 - 2481
  • [9] Robustness-via-synthesis: Robust training with generative adversarial perturbations
    Baytas, Inci M.
    Deb, Debayan
    NEUROCOMPUTING, 2023, 516 : 49 - 60
  • [10] Rethinking Adaptive Computing: Building a Unified Model Complexity-Reduction Framework With Adversarial Robustness
    Wang, Meiqi
    He, Liulu
    Lin, Jun
    Wang, Zhongfeng
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (04) : 1803 - 1810