Enhancing the transferability of adversarial samples with random noise techniques

Times Cited: 2
Authors
Huang, Jiahao [1 ]
Wen, Mi [1 ]
Wei, Minjie [1 ]
Bi, Yanbing [2 ]
Affiliations
[1] Shanghai Univ Elect Power, Coll Comp Sci & Technol, Shanghai 201306, Peoples R China
[2] State Grid Info & Telecom Grp, Beijing 100000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Deep learning; Adversarial samples; Adversarial attack; Adversarial transferability; DNN security; ARCHITECTURES
DOI
10.1016/j.cose.2023.103541
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep neural networks have achieved remarkable success in computer vision, yet they remain susceptible to adversarial attacks. The transferability of adversarial samples makes practical black-box attacks feasible, which underscores the importance of research on transferability. Existing work indicates that adversarial samples tend to overfit the source model and become trapped in local optima, which reduces their transferability. To address this issue, we propose the Random Noise Transfer Attack (RNTA), which searches for adversarial samples over a larger data distribution in pursuit of the global optimum. Specifically, we inject multiple random noise perturbations into the sample before each optimization iteration, effectively exploring the decision boundary within an extended data-distribution space. By aggregating the resulting gradients, we identify a better global optimum and mitigate overfitting to the source model. Through extensive experiments on the large-scale ImageNet classification task, we demonstrate that our method increases the success rate of momentum-based attacks by an average of 20.1%. Furthermore, our approach can be combined with existing attack methods to achieve a success rate of 94.3%, highlighting the insecurity of current models and defense mechanisms.
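The procedure summarized in the abstract (inject random noise before each iteration, aggregate the gradients of the noisy copies, then apply a momentum-based update) can be sketched as follows. This is a minimal illustrative sketch in PyTorch built on an MI-FGSM-style update, not the authors' released implementation; the function name rnta_attack and the hyperparameters (noise count n_noise, noise scale sigma, step budget, momentum factor mu) are assumptions for illustration only.

```python
import torch


def rnta_attack(model, x, y, eps=16 / 255, steps=10, mu=1.0, n_noise=5, sigma=0.1):
    """Sketch of an RNTA-style iterative attack on top of MI-FGSM.

    Several random noise perturbations are injected into the current
    adversarial sample before each update, and the gradients from the
    noisy copies are averaged, approximating a search over a wider data
    distribution around the sample. Hyperparameter values are illustrative.
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    alpha = eps / steps                       # per-iteration step size
    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x)

    for _ in range(steps):
        grad_sum = torch.zeros_like(x)
        for _ in range(n_noise):
            # Inject random noise before computing the gradient (core RNTA idea).
            noisy = (x_adv + sigma * torch.randn_like(x_adv)).clamp(0, 1)
            noisy.requires_grad_(True)
            loss = loss_fn(model(noisy), y)
            grad_sum = grad_sum + torch.autograd.grad(loss, noisy)[0]
        grad = grad_sum / n_noise             # aggregate gradients over noisy copies

        # Momentum accumulation and signed update, as in MI-FGSM.
        momentum = mu * momentum + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv + alpha * momentum.sign()

        # Project back into the L-infinity eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()

    return x_adv
```

Averaging gradients over several noisy copies of the current sample is what broadens the explored data distribution; the momentum term then smooths the update direction across iterations, which is why the method composes naturally with existing momentum-based attacks.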
Pages: 12
Related Papers (50 total)
  • [41] Enhancing visual adversarial transferability via affine transformation of intermediate-level perturbations
    Li, Qizhang
    Guo, Yiwen
    Zuo, Wangmeng
    PATTERN RECOGNITION LETTERS, 2025, 191 : 51 - 57
  • [42] Enhancing Adversarial Transferability With Intermediate Layer Feature Attack on Synthetic Aperture Radar Images
    Wan, Xuanshen
    Liu, Wei
    Niu, Chaoyang
    Lu, Wanjie
    Li, Yuanli
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2025, 18 : 1638 - 1655
  • [43] Ranking the Transferability of Adversarial Examples
    Levy, Moshe
    Amit, Guy
    Elovici, Yuval
    Mirsky, Yisroel
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2024, 15 (05)
  • [44] Improving the Transferability of Adversarial Samples through Automatically Learning Augmentation Strategies from Data
    Xu, Ru-Zhi
    Lyu, Chang-Ran
INTERNATIONAL JOURNAL OF NETWORK SECURITY, 2023, 25 (06) : 983 - 991
  • [45] Exploring Transferability on Adversarial Attacks
    Alvarez, Enrique
    Alvarez, Rafael
    Cazorla, Miguel
    IEEE ACCESS, 2023, 11 : 105545 - 105556
  • [46] On the Adversarial Transferability of ConvMixer Models
    Iijima, Ryota
    Tanaka, Miki
    Echizen, Isao
    Kiya, Hitoshi
    PROCEEDINGS OF 2022 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2022, : 1826 - 1830
  • [47] Boosting the transferability of adversarial CAPTCHAs
    Xu, Zisheng
    Yan, Qiao
    COMPUTERS & SECURITY, 2024, 145
  • [48] Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation
    Qin, Zeyu
    Fan, Yanbo
    Liu, Yi
    Shen, Li
    Zhang, Yong
    Wang, Jue
    Wu, Baoyuan
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [49] DIB-UAP: enhancing the transferability of universal adversarial perturbation via deep information bottleneck
    Wang, Yang
    Zheng, Yunfei
    Chen, Lei
    Yang, Zhen
    Cao, Tieyong
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (05) : 6825 - 6837
  • [50] SCANNING TECHNIQUES FOR RANDOM NOISE TESTING
    CHAPMAN, CP
JOURNAL OF ENVIRONMENTAL SCIENCES, 1968, 11 (01) : 27 - &