On the Adversarial Robustness of Robust Estimators

Cited by: 4
Authors
Lai, Lifeng [1 ]
Bayraktar, Erhan [2 ]
Affiliations
[1] Univ Calif Davis, Dept Elect & Comp Engn, Davis, CA 95616 USA
[2] Univ Michigan, Dept Math, Ann Arbor, MI 48104 USA
Funding
U.S. National Science Foundation (NSF)
Keywords
Robustness; Estimation; Optimization; Principal component analysis; Data analysis; Neural networks; Sociology; Robust estimators; adversarial robustness; M-estimator; non-convex optimization;
DOI
10.1109/TIT.2020.2985966
CLC Classification Number
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Motivated by recent data analytics applications, we study the adversarial robustness of robust estimators. Instead of assuming that only a fraction of the data points are outliers, as in the classic robust estimation setup, we consider an adversarial setup in which an attacker can observe the whole dataset and can modify all data samples so as to maximize the estimation error caused by the attack. We characterize the attacker's optimal attack strategy, and further introduce the adversarial influence function (AIF) to quantify an estimator's sensitivity to such adversarial attacks. We provide an approach to characterize the AIF of any given robust estimator, and then design the optimal estimator that minimizes AIF, i.e., the estimator that is least sensitive to, and hence most robust against, adversarial attacks. From this characterization, we identify a tradeoff between AIF (i.e., robustness against adversarial attacks) and the influence function, a quantity used in classic robust estimation to measure robustness against outliers, and design estimators that strike a desirable balance between these two quantities.
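To make the setup concrete, the sketch below illustrates, under simplifying assumptions of our own, the kind of attack and sensitivity the abstract describes: an attacker with a total l2 perturbation budget spread over all samples shifts a Huber location M-estimate, and the resulting shift per unit budget serves as a rough numerical stand-in for what the AIF quantifies. The Huber estimator, the first-order attack rule, and the budget model are illustrative choices only, not the paper's exact constructions.

```python
# Illustrative sketch (not the paper's formulation): an attacker who observes the
# whole dataset spends a total l2 perturbation budget across all samples to shift
# a robust location M-estimate as much as possible (to first order).
import numpy as np
from scipy.optimize import minimize_scalar

K = 1.345  # standard Huber tuning constant (illustrative choice)


def huber_location(x, k=K):
    """Location M-estimate: argmin_theta sum_i rho_k(x_i - theta) with Huber loss rho_k."""
    def loss(theta):
        r = x - theta
        return np.sum(np.where(np.abs(r) <= k,
                               0.5 * r ** 2,
                               k * (np.abs(r) - 0.5 * k)))
    return minimize_scalar(loss).x


def first_order_attack(x, budget, k=K):
    """Allocate a total l2 budget over per-sample perturbations.

    For a location M-estimator, d(theta)/d(x_i) = psi'(x_i - theta) / sum_j psi'(x_j - theta),
    so maximizing the linearized shift subject to ||delta||_2 <= budget allocates
    delta proportionally to these derivatives (Cauchy-Schwarz).
    """
    theta = huber_location(x, k)
    psi_prime = (np.abs(x - theta) <= k).astype(float)  # Huber: psi'(r) = 1 if |r| <= k else 0
    w = psi_prime / psi_prime.sum()
    return x + budget * w / np.linalg.norm(w)


rng = np.random.default_rng(0)
x = rng.standard_normal(500)   # clean samples, true location 0
budget = 0.5                   # total l2 attack budget

theta_clean = huber_location(x)
theta_attacked = huber_location(first_order_attack(x, budget))

# Shift per unit of attack budget: a crude numerical stand-in for the kind of
# sensitivity the adversarial influence function (AIF) is meant to capture.
print(f"clean estimate        : {theta_clean:+.4f}")
print(f"attacked estimate     : {theta_attacked:+.4f}")
print(f"shift per unit budget : {(theta_attacked - theta_clean) / budget:.4f}")
```

For this particular sketch the linearization gives a shift of roughly budget times the l2 norm of the influence weights, which decays as the sample size grows; the paper's exact attack characterization, the AIF, and its tradeoff with the classical influence function are developed rigorously in the full text.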
Pages: 5097 - 5109
Number of pages: 13
Related Papers
(50 in total)
  • [31] Analyzing the Robustness of Deep Learning Against Adversarial Examples
    Zhao, Jun
    2018 56TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2018, : 1060 - 1064
  • [32] Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications
    Ruan, Wenjie
    Yi, Xinping
    Huang, Xiaowei
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 4866 - 4869
  • [33] Adversarial robustness via noise injection in smoothed models
    Nemcovsky, Yaniv
    Zheltonozhskii, Evgenii
    Baskin, Chaim
    Chmiel, Brian
    Bronstein, Alex M.
    Mendelson, Avi
    APPLIED INTELLIGENCE, 2023, 53 (08) : 9483 - 9498
  • [35] FASTEN: Fast Ensemble Learning for Improved Adversarial Robustness
    Huang, Lifeng
    Huang, Qiong
    Qiu, Peichao
    Wei, Shuxin
    Gao, Chengying
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 2565 - 2580
  • [36] Adversarial Robustness on Image Classification With k-Means
    Omari, Rollin
    Kim, Junae
    Montague, Paul
    IEEE ACCESS, 2024, 12 : 28853 - 28859
  • [37] On robustness and efficiency of minimum divergence estimators
    Jiménez, R
    Shao, YZ
    TEST, 2001, 10 (02) : 241 - 248
  • [38] Robustness by Reweighting for Kernel Estimators: An Overview
    De Brabanter, Kris
    De Brabanter, Jos
    STATISTICAL SCIENCE, 2021, 36 (04) : 578 - 594
  • [39] On the robustness of two-stage estimators
    Zhelonkin, Mikhail
    Genton, Marc G.
    Ronchetti, Elvezio
    STATISTICS & PROBABILITY LETTERS, 2012, 82 (04) : 726 - 732
  • [40] On efficiency and robustness of estimators for a spherical location
    Kanika
    Kumar, Somesh
    STATISTICS, 2019, 53 (03) : 601 - 629