Sample-analysis based adversarial attack with saliency map

Cited by: 0
Authors
Zhang, Dian [1 ]
Dong, Yunwei [2 ]
Yang, Yun [3 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian, Peoples R China
[2] Northwestern Polytech Univ, Sch Software, Xian, Peoples R China
[3] Swinburne Univ Technol, Dept Comp Technol, Hawthorn, Australia
Keywords
Deep learning; Vulnerability; Robustness evaluation; Sample analysis; Adversarial examples;
DOI
10.1016/j.asoc.2024.111733
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the widespread application of deep learning, the vulnerability of neural networks has attracted considerable attention, raising reliability and security concerns. Research on the robustness of neural networks has therefore become increasingly critical. In this paper, we propose a novel sample-analysis based robustness evaluation method that overcomes drawbacks of existing techniques, such as the difficulty of solving, reliance on a single strategy, and loose robustness radii. Our algorithm comprises two parts: robustness evaluation and adversarial attack. Specifically, we introduce formal definitions of multiple sample types and a general formulation of the adversarial-sample problem. We describe adversarial samples with a disturbance model in the adversarial attack algorithm and use saliency maps to solve for them. Our experimental results demonstrate that the attack not only achieves a high success rate within a relatively small disturbance range but also generates multiple adversarial examples for each clean example. Our algorithm can evaluate the robustness of complex datasets and models, overcomes the single-strategy limitation in solving adversarial examples, and provides a tighter robustness radius.
Pages: 15
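As a concrete illustration of the saliency-map-guided attack described in the abstract, the following Python sketch perturbs only the most salient input features of a classifier within a bounded disturbance range. It assumes a PyTorch image classifier; the names model, epsilon, and k are illustrative assumptions, and the sketch is not the authors' disturbance-model formulation.

    # A minimal sketch of a saliency-map-guided adversarial attack,
    # not the paper's exact method.
    import torch
    import torch.nn.functional as F

    def saliency_guided_attack(model, x, label, epsilon=0.1, k=100):
        """Perturb the k most salient input features within an epsilon bound."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        loss.backward()
        grad = x_adv.grad.detach()

        # Saliency map: gradient magnitude of the loss w.r.t. each input feature.
        saliency = grad.abs().flatten(1)

        # Binary mask selecting the k most influential features per sample.
        _, idx = saliency.topk(k, dim=1)
        mask = torch.zeros_like(saliency).scatter_(1, idx, 1.0).view_as(x)

        # Apply a signed, epsilon-bounded disturbance only to salient features.
        perturbed = (x + mask * epsilon * grad.sign()).clamp(0.0, 1.0)
        return perturbed.detach()

Varying k (or re-running on the remaining features) yields several distinct perturbations per clean input, consistent with the abstract's claim of generating multiple adversarial examples for each clean example.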