Fast adversarial attacks to deep neural networks through gradual sparsification

Cited by: 4
Authors
Amini, Sajjad [1 ]
Heshmati, Alireza [1 ]
Ghaemmaghami, Shahrokh [1 ]
Affiliations
[1] Sharif Univ Technol, Elect Res Inst ERI, Tehran, Iran
Keywords
Robustness; Sparsity; Proximal operator; Targeted attack; Warm start; Selection
DOI
10.1016/j.engappai.2023.107360
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline code
0812
Abstract
Deep learning networks, emerging machine learning models that achieve beyond human-level accuracy, are critically vulnerable to adversarial attacks. This vulnerability limits the use of deep learning architectures in many real-world safety-critical applications, such as autonomous vehicles, medical diagnosis, and sensitive industrial systems. Adversarial attacks are methods to measure the robustness of different architectures and can be used to evaluate whether a model is suitable for a safety-critical setting. White-box sparse adversarial attacks can reveal interesting features of deep learning networks by identifying critical elements in the input pattern, which can in turn be used to design black-box attacks. This motivated us to develop a new algorithmic procedure for designing sparse adversarial attacks on feed-forward neural networks based on sparsity regularization. The proposed method performs gradual sparsification: it starts by designing a dense attack and prunes it until a desired level of sparsity is attained. We evaluate the performance of the proposed algorithm in designing attacks on convolutional neural networks and attention-based architectures for the image classification task, using three non-smooth sparsity-promoting regularizers. Compared to state-of-the-art sparse attack schemes, we show that the proposed method can significantly decrease the time needed to design the attack, while the perturbation distortion is unchanged, or even reduced in some cases.
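The gradual-sparsification idea in the abstract (a dense warm start, pruned by a proximal step under a gradually strengthened sparsity penalty) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the "model" is a toy linear scorer, and the names (`soft_threshold`, `gradual_sparse_attack`), the L1 regularizer, the step size, and the penalty growth schedule are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def gradual_sparse_attack(w, x, step=0.1, lam0=1e-3, growth=1.3,
                          target_sparsity=0.9, max_iter=200):
    """Toy sketch: ascend the linear score w.(x + delta) with proximal
    gradient steps, growing the L1 penalty lam so that soft-thresholding
    gradually prunes delta toward the target sparsity level."""
    rng = np.random.default_rng(0)
    delta = 0.01 * rng.standard_normal(x.shape)   # dense warm start
    lam = lam0
    for _ in range(max_iter):
        grad = w                                  # d/d(delta) of w.(x + delta)
        delta = soft_threshold(delta + step * grad, step * lam)
        if np.mean(delta == 0.0) >= target_sparsity:
            break
        lam *= growth                             # strengthen the sparsity penalty
    return delta

w = np.linspace(-1.0, 1.0, 50)                    # toy linear model weights
x = np.zeros(50)                                  # toy input
delta = gradual_sparse_attack(w, x)
print("nonzero entries:", int(np.count_nonzero(delta)), "of", delta.size)
```

Because the penalty grows geometrically, coordinates with little gradient influence are thresholded to zero first, mimicking the dense-to-sparse pruning schedule the abstract describes; the paper itself considers three non-smooth regularizers, each with its own proximal operator.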
Pages: 11
References (48 total)
[1]   Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey [J].
Akhtar, Naveed ;
Mian, Ajmal .
IEEE ACCESS, 2018, 6 :14410-14430
[2]   A New Framework to Train Autoencoders Through Non-Smooth Regularization [J].
Amini, Sajjad ;
Ghaemmaghami, Shahrokh .
IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2019, 67 (07) :1860-1874
[3]  
[Anonymous], 2009, Cifar-10
[4]   Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods [J].
Attouch, Hedy ;
Bolte, Jerome ;
Svaiter, Benar Fux .
MATHEMATICAL PROGRAMMING, 2013, 137 (1-2) :91-129
[5]   Vehicle Detection From UAV Imagery With Deep Learning: A Review [J].
Bouguettaya, Abdelmalek ;
Zarzour, Hafed ;
Kechida, Ahmed ;
Taberkit, Amine Mohammed .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (11) :6047-6067
[6]   Towards Evaluating the Robustness of Neural Networks [J].
Carlini, Nicholas ;
Wagner, David .
2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2017, :39-57
[7]  
Cisse M, 2017, PR MACH LEARN RES, V70
[8]   Sparse and Imperceivable Adversarial Attacks [J].
Croce, Francesco ;
Hein, Matthias .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :4723-4731
[9]   Differential Evolution: A Survey of the State-of-the-Art [J].
Das, Swagatam ;
Suganthan, Ponnuthurai Nagaratnam .
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2011, 15 (01) :4-31
[10]  
Deng J, 2009, PROC CVPR IEEE, P248, DOI 10.1109/CVPRW.2009.5206848