Robustness evaluation of image classification models based on edge features: Tight and Non-Tight boundaries

Cited: 0
Authors
Lu, Hui [1 ]
Mei, Ting [1 ]
Wang, Shiqi [1 ]
Zhang, Ruoliu [1 ]
Mao, Kefei [2 ]
Affiliations
[1] Beihang University, School of Electronic and Information Engineering, Beijing 100191, People's Republic of China
[2] Beihang University, Institute of Unmanned Systems, Beijing 100191, People's Republic of China
Keywords
Adversarial attacks; Edge features; Robustness evaluation; Non-Tight boundary; Tight boundary; ART
DOI
10.1016/j.neucom.2025.129378
CLC number
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
State-of-the-art deep neural networks (DNNs) are susceptible to adversarial images, posing a significant threat to safety-critical tasks. White-box attacks are an effective means of testing model robustness. Compared with perturbing the entire image, perturbing only the specific features that drive model decisions, while remaining imperceptible to the human eye, poses a greater threat. In addition, robustness assessment calls for more informative and accurate evaluation metrics. This paper therefore first proposes a set of metrics that demonstrate the significance of edge features for image classification models and validates them on three different datasets. Drawing inspiration from the minimum perturbation distance and boundary thickness, we propose the Non-Tight boundary width and the Tight boundary width as statistically grounded robustness evaluation metrics. Finally, a method called E-IFGSM is proposed to generate robustness-testing examples on both sides of the decision boundary by targeting edge features. To validate the proposed robustness evaluation approach, extensive experiments are conducted on 14 CIFAR-10 models with varying architectures and training stages, yielding noteworthy conclusions about model robustness. The evaluation method is also versatile: it is extended to the DeepFool method to evaluate robustness from the perspective of entire images.
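The record does not include the paper's algorithmic details, but the core idea of attacking only edge features can be illustrated. Below is a minimal, hypothetical PyTorch sketch of an edge-masked iterative FGSM in the spirit of E-IFGSM: it assumes a Sobel-based edge mask and a standard L-infinity sign-gradient update confined to edge pixels. The names `sobel_edge_mask` and `edge_masked_ifgsm`, the edge threshold, and the step schedule are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of an edge-masked iterative FGSM ("E-IFGSM"-style) attack.
# Assumptions (not from the record): edges come from a Sobel filter, and the
# sign-gradient update is masked so only edge pixels are perturbed.
import torch
import torch.nn.functional as F

def sobel_edge_mask(x, threshold=0.3):
    """Binary edge mask per image; x has shape (N, C, H, W) with values in [0, 1]."""
    gray = x.mean(dim=1, keepdim=True)                        # (N, 1, H, W)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)                                   # vertical Sobel kernel
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2)
    mag = mag / (mag.amax(dim=(2, 3), keepdim=True) + 1e-12)  # normalize per image
    return (mag > threshold).float()                          # broadcasts over channels

def edge_masked_ifgsm(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative FGSM whose perturbation is confined to edge pixels."""
    mask = sobel_edge_mask(x)
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign() * mask            # perturb edges only
        x_adv = x + (x_adv - x).clamp(-eps, eps)              # L-inf projection
        x_adv = x_adv.clamp(0.0, 1.0)                         # stay in valid pixel range
    return x_adv.detach()
```

Running the same attack with the mask inverted (perturbing only non-edge pixels) would give a simple baseline for gauging how much edge features alone drive the classifier's decision, in line with the abstract's claim about their significance.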
Pages: 16