On the Adversarial Robustness of Decision Trees and a Symmetry Defense

Cited: 0
Authors
Lindqvist, Blerta [1]
Affiliation
[1] Aalto Univ, Dept Comp Sci, Espoo 02150, Finland
Source
IEEE ACCESS | 2025, Vol. 13
Keywords
Perturbation methods; Robustness; Training; Boosting; Accuracy; Threat modeling; Diabetes; Decision trees; Current measurement; Closed box; Adversarial perturbation attacks; adversarial robustness; equivariance; gradient-boosting decision trees; invariance; symmetry defense; XGBoost
DOI
10.1109/ACCESS.2025.3530695
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Gradient-boosting decision tree classifiers (GBDTs) are susceptible to adversarial perturbation attacks that change inputs slightly to cause misclassification. GBDTs are customarily used on non-image datasets that lack inherent symmetries, which might be why data symmetry in the context of GBDT classifiers has not received much attention. In this paper, we show that GBDTs can classify symmetric samples differently, which means that GBDTs lack invariance with respect to symmetry. Based on this, we defend GBDTs against adversarial perturbation attacks using symmetric adversarial samples in order to obtain correct classification. We apply and evaluate the symmetry defense against six adversarial perturbation attacks on the GBDT classifiers of nine datasets with a threat model that ranges from zero-knowledge to perfect-knowledge adversaries. Against zero-knowledge adversaries, we use the feature inversion symmetry and exceed the accuracies of default and robust classifiers by up to 100 percentage points. Against perfect-knowledge adversaries for the GBDT classifier of the F-MNIST dataset, we use the feature inversion and horizontal flip symmetries and exceed the accuracies of default and robust classifiers by up to 96 percentage points. Finally, we show that the current definition of adversarial robustness, based on the minimum perturbation needed to construct a misclassifying adversarial sample, might be inadequate for two reasons. First, this definition assumes that attacks mostly succeed, failing to consider the case when attacks are unable to construct misclassifying adversarial samples against a classifier. Second, GBDT adversarial robustness as currently defined can decrease by training with additional samples, even samples drawn from the training set itself, which counters the common wisdom that more training samples should increase robustness. With the current definition of GBDT adversarial robustness, we can make GBDTs more adversarially robust by training them with fewer samples! The code is publicly available at https://github.com/blertal/xgboost-symmetry-defense.
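To make the defense concrete, the following is a minimal sketch of the feature-inversion symmetry defense described above, assuming features min-max scaled to [0, 1] so that inversion is x -> 1 - x. The dataset, hyperparameters, and helper names (invert, defended_predict) are illustrative assumptions, not the paper's exact implementation; see the linked repository for that.

# Minimal sketch of a feature-inversion symmetry defense for a GBDT.
# Assumptions (not taken from the paper's repository): features are
# min-max scaled to [0, 1], so the inversion symmetry is x -> 1 - x;
# the dataset, helper names, and hyperparameters are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from xgboost import XGBClassifier

def invert(X):
    # Feature-inversion symmetry: reflect every feature within [0, 1].
    return 1.0 - X

# Load a tabular dataset and scale its features to [0, 1].
X, y = load_breast_cancer(return_X_y=True)
X = MinMaxScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Default classifier, trained on the original features.
clf_default = XGBClassifier(n_estimators=200, max_depth=4)
clf_default.fit(X_train, y_train)

# Symmetric classifier, trained on inverted features. Because GBDTs are
# not invariant to this symmetry, perturbations crafted against
# clf_default need not transfer to clf_sym applied to inverted inputs.
clf_sym = XGBClassifier(n_estimators=200, max_depth=4)
clf_sym.fit(invert(X_train), y_train)

def defended_predict(X):
    # Defense: classify the symmetric counterpart of a (possibly
    # adversarial) input with the symmetric classifier.
    return clf_sym.predict(invert(X))

print("default accuracy :", accuracy_score(y_test, clf_default.predict(X_test)))
print("defended accuracy:", accuracy_score(y_test, defended_predict(X_test)))

Against a zero-knowledge adversary who attacks clf_default, routing inputs through defended_predict means the perturbation is evaluated by a classifier it was never optimized against; the perfect-knowledge setting in the paper additionally combines symmetries, e.g. feature inversion with horizontal flips on F-MNIST.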
Pages: 16120-16132 (13 pages)
Related Papers (showing 10 of 50)
  • [1] Improving Adversarial Robustness With Adversarial Augmentations
    Chen, Chuanxi
    Ye, Dengpan
    He, Yiheng
    Tang, Long
    Xu, Yue
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (03) : 5105 - 5117
  • [2] Detection and Defense: Student-Teacher Network for Adversarial Robustness
    Park, Kyoungchan
    Kang, Pilsung
    IEEE ACCESS, 2024, 12 : 82742 - 82752
  • [3] Robustness analysis of classical and fuzzy decision trees under adversarial evasion attack
    Chan, Patrick P. K.
    Zheng, Juan
    Liu, Han
    Tsang, E. C. C.
    Yeung, Daniel S.
    APPLIED SOFT COMPUTING, 2021, 107
  • [4] Genetic Adversarial Training of Decision Trees
    Ranzato, Francesco
    Zanella, Marco
    PROCEEDINGS OF THE 2021 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE (GECCO'21), 2021, : 358 - 367
  • [5] Adversarial Defense on Harmony: Reverse Attack for Robust AI Models Against Adversarial Attacks
    Kim, Yebon
    Jung, Jinhyo
    Kim, Hyunjun
    So, Hwisoo
    Ko, Yohan
    Shrivastava, Aviral
    Lee, Kyoungwoo
    Hwang, Uiwon
    IEEE ACCESS, 2024, 12 : 176485 - 176497
  • [6] Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume
    Guo, Ping
    Gong, Cheng
    Lin, Xi
    Yang, Zhiyuan
    Zhang, Qingfu
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2025, 9 (02) : 1367 - 1378
  • [7] IDEA: Invariant defense for graph adversarial robustness
    Tao, Shuchang
    Cao, Qi
    Shen, Huawei
    Wu, Yunfan
    Xu, Bingbing
    Cheng, Xueqi
    INFORMATION SCIENCES, 2024, 680
  • [8] On the Importance of Backbone to the Adversarial Robustness of Object Detectors
    Li, Xiao
    Chen, Hang
    Hu, Xiaolin
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 2387 - 2398
  • [9] Proving Data-Poisoning Robustness in Decision Trees
    Drews, Samuel
    Albarghouthi, Aws
    D'Antoni, Loris
    PROCEEDINGS OF THE 41ST ACM SIGPLAN CONFERENCE ON PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION (PLDI '20), 2020, : 1083 - 1097
  • [10] Stylized Adversarial Defense
    Naseer, Muzammal
    Khan, Salman
    Hayat, Munawar
    Khan, Fahad Shahbaz
    Porikli, Fatih
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (05) : 6403 - 6414