On the Adversarial Robustness of Decision Trees and a Symmetry Defense

Cited by: 0
Authors
Lindqvist, Blerta [1]
Affiliations
[1] Aalto Univ, Dept Comp Sci, Espoo 02150, Finland
Source
IEEE ACCESS | 2025, Vol. 13
Keywords
Perturbation methods; Robustness; Training; Boosting; Accuracy; Threat modeling; Diabetes; Decision trees; Current measurement; Closed box; Adversarial perturbation attacks; adversarial robustness; equivariance; gradient-boosting decision trees; invariance; symmetry defense; XGBoost;
DOI
10.1109/ACCESS.2025.3530695
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Gradient-boosting decision tree classifiers (GBDTs) are susceptible to adversarial perturbation attacks that change inputs slightly in order to cause misclassification. GBDTs are customarily used on non-image datasets that lack inherent symmetries, which may be why data symmetry in the context of GBDT classifiers has received little attention. In this paper, we show that GBDTs can classify symmetric samples differently, meaning that GBDTs lack invariance with respect to symmetry. Based on this observation, we defend GBDTs against adversarial perturbation attacks by classifying symmetric versions of adversarial samples in order to obtain correct classifications. We apply and evaluate the symmetry defense against six adversarial perturbation attacks on the GBDT classifiers of nine datasets, with a threat model that ranges from zero-knowledge to perfect-knowledge adversaries. Against zero-knowledge adversaries, we use the feature-inversion symmetry and exceed the accuracies of default and robust classifiers by up to 100 percentage points. Against perfect-knowledge adversaries attacking the GBDT classifier of the F-MNIST dataset, we use the feature-inversion and horizontal-flip symmetries and exceed the accuracies of default and robust classifiers by up to 96 percentage points. Finally, we show that the current definition of adversarial robustness, based on the minimum perturbation values of misclassifying adversarial samples, may be inadequate for two reasons. First, this definition assumes that attacks mostly succeed, failing to consider the case in which attacks cannot construct misclassifying adversarial samples against a classifier. Second, GBDT adversarial robustness as currently defined can decrease when a model is trained with additional samples, even training samples, which counters the common wisdom that more training samples should increase robustness. Under the current definition of GBDT adversarial robustness, we can make GBDTs more adversarially robust by training them with fewer samples!
The code is publicly available at https://github.com/blertal/xgboost-symmetry-defense.
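The feature-inversion symmetry defense described in the abstract can be illustrated with a minimal sketch: train a second GBDT on feature-inverted samples, and at inference time invert the incoming sample before classifying it, so that perturbations crafted against the default classifier no longer line up with the defended model's decision boundaries. The sketch below is an illustrative assumption, not the paper's implementation (see the linked repository for that): it uses scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost, synthetic data, and the convention that features are scaled to [0, 1] so inversion is x → 1 − x.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Toy data with features min-max scaled to [0, 1],
# so the feature-inversion symmetry is simply x -> 1 - x.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def invert(x):
    """Feature-inversion symmetry for [0, 1]-scaled features."""
    return 1.0 - x

# Default classifier: the model an adversary would craft perturbations against.
default_clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Symmetry defense: a classifier trained on feature-inverted samples.
sym_clf = GradientBoostingClassifier(random_state=0).fit(invert(X_tr), y_tr)

def defended_predict(x):
    """Invert the incoming sample, then classify with the symmetric model."""
    return sym_clf.predict(invert(x))

# The abstract's invariance claim: a GBDT can classify a sample and its
# symmetric counterpart differently. Measure how often the default model
# disagrees with itself across the symmetry.
disagreement = float(
    np.mean(default_clf.predict(invert(X_te)) != default_clf.predict(X_te))
)

acc_default = default_clf.score(X_te, y_te)
acc_defended = float(np.mean(defended_predict(X_te) == y_te))
```

On clean data the defended pipeline matches the default classifier's accuracy, since the symmetric model sees an equivalent (mirrored) feature space; the defense's benefit, per the abstract, appears under adversarial perturbations, which this toy sketch does not generate.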
Pages: 16120 - 16132
Page count: 13