Adversarial Attack Against Convolutional Neural Network via Gradient Approximation

Cited: 0
Authors
Wang, Zehao [1 ]
Li, Xiaoran [2 ]
Affiliations
[1] Tiangong Univ, Sch Software, Tianjin, Peoples R China
[2] Xiamen Univ, Sch Elect Sci & Engn, Xiamen, Peoples R China
Source
ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT VI, ICIC 2024 | 2024 / Vol. 14867
Keywords
Adversarial Attack; Image Classification; Convolutional Neural Network; Gradient Approximation;
DOI
10.1007/978-981-97-5597-4_19
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
At present, convolutional neural networks (CNNs) have become an essential method for image recognition tasks, owing to their remarkable accuracy and efficiency. However, CNNs are susceptible to adversarial attacks, in which slight, imperceptible alterations to input images cause misclassifications, posing significant security concerns. This work proposes a novel adversarial attack strategy against CNNs based on gradient approximation, addressing settings in which the gradient information inside a deep learning model is opaque. Specifically, our approach leverages an optimization algorithm to approximate the direction and magnitude of the gradient, enabling the generation of adversarial samples even when direct access to the model's gradients is unavailable. Extensive experiments show that the proposed attack significantly reduces classification accuracy while keeping the adversarial samples perceptually indistinguishable from their original counterparts.
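To make the core idea concrete, below is a minimal sketch of black-box gradient approximation via central finite differences over random probe directions (NES-style zeroth-order estimation), driving a simple sign-step attack under an L-infinity budget. The paper's exact optimization algorithm is not specified in this record, so this is an illustrative assumption: the function names (`estimate_gradient`, `attack`), the toy stand-in loss, and all hyperparameters are hypothetical.

```python
import numpy as np

def estimate_gradient(loss_fn, x, sigma=1e-3, n_samples=50, rng=None):
    """Approximate the gradient of loss_fn at x using only queries,
    via central finite differences along random Gaussian directions."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)                 # random probe direction
        delta = loss_fn(x + sigma * u) - loss_fn(x - sigma * u)
        grad += delta / (2.0 * sigma) * u                # central-difference term
    return grad / n_samples

def attack(loss_fn, x, epsilon=0.03, alpha=0.005, steps=10):
    """Iterative sign-step attack driven by the approximated gradient,
    constrained to an L-infinity ball of radius epsilon around x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = estimate_gradient(loss_fn, x_adv)
        x_adv = x_adv + alpha * np.sign(g)               # ascend the estimated loss
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon) # stay within the budget
        x_adv = np.clip(x_adv, 0.0, 1.0)                 # keep pixels valid
    return x_adv

if __name__ == "__main__":
    # Toy stand-in for "loss of the true class under a black-box CNN":
    # a fixed random linear scorer over a 32x32 grayscale image.
    rng = np.random.default_rng(42)
    w = rng.standard_normal((32, 32))
    loss = lambda img: float((img * w).sum())
    x0 = rng.random((32, 32))
    x_adv = attack(loss, x0)
    print("loss before:", loss(x0), "after:", loss(x_adv))
    print("max perturbation:", float(np.abs(x_adv - x0).max()))
```

Central differences are used rather than one-sided differences because they cancel even-order error terms and reduce estimator variance at the same query cost; the sign step mirrors the iterative FGSM family, where only the estimated gradient direction is needed.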
Pages: 221-232
Page count: 12