Boosting Adversarial Training Using Robust Selective Data Augmentation

Cited by: 0
Authors
Bader Rasheed
Asad Masood Khattak
Adil Khan
Stanislav Protasov
Muhammad Ahmad
Affiliations
[1] Innopolis University, Institute of Data Science and Artificial Intelligence
[2] University of Hull, School of Computer Science
[3] Zayed University, College of Technological Innovation
[4] National University of Computer and Emerging Sciences, Department of Computer Science
Source
International Journal of Computational Intelligence Systems, Vol. 16
Keywords
Deep neural networks; Robustness; Adversarial attacks; Data augmentation; Adversarial training
DOI
Not available
Abstract
Artificial neural networks are now applied in a wide variety of fields and are approaching human-level performance on many tasks. Nevertheless, they are vulnerable to adversarial attacks in the form of small, intentionally designed perturbations that can cause misclassification, rendering these models unusable, especially in security-critical applications. The best defense against these attacks so far is adversarial training (AT), which improves a model’s robustness by augmenting the training data with adversarial examples. In this work, we show that the performance of AT can be further improved by using the latent-space neighborhood of each adversarial example to make additional targeted augmentations to the training data. More specifically, we propose a robust selective data augmentation (RSDA) approach to enhance the performance of AT. RSDA complements AT by inspecting the quality of the data from a robustness perspective and performing data transformation operations on specific neighbors of each adversarial sample in the latent space. We evaluate RSDA on the MNIST and CIFAR-10 datasets under multiple adversarial attacks. Our experiments show that RSDA yields significantly better results than AT alone on both adversarial and clean samples.
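The abstract describes RSDA only at a high level: generate adversarial examples, locate each one's neighbors in the latent space, and apply targeted transformations to those neighbors before training. The sketch below illustrates this idea in PyTorch; the FGSM attack, Euclidean nearest-neighbor search over the encoder's latent representations, and the additive-noise transformation of selected neighbors are all illustrative assumptions, not the paper's actual RSDA procedure.

```python
# Minimal sketch of adversarial training with latent-neighbor augmentation.
# The attack, distance metric, and neighbor transformation below are
# illustrative assumptions; the paper's exact RSDA rules are not given here.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, eps=0.03):
    """Single-step FGSM adversarial example (stand-in for the paper's attack)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
    model.zero_grad(set_to_none=True)  # discard gradients from the attack pass
    return x_adv


def latent_neighbors(encoder, x_adv, x_pool, k=3):
    """Indices of the k training samples closest to each adversarial example
    in the encoder's latent space (Euclidean distance assumed)."""
    with torch.no_grad():
        z_adv = encoder(x_adv).flatten(1)      # (B, d)
        z_pool = encoder(x_pool).flatten(1)    # (N, d)
        dists = torch.cdist(z_adv, z_pool)     # (B, N)
    return dists.topk(k, largest=False).indices  # (B, k)


def rsda_batch(model, encoder, x, y, x_pool, y_pool, k=3):
    """One augmented batch: clean + adversarial + transformed latent neighbors."""
    x_adv = fgsm_attack(model, x, y)
    idx = latent_neighbors(encoder, x_adv, x_pool, k).reshape(-1)
    x_nb, y_nb = x_pool[idx], y_pool[idx]
    # Illustrative transformation of the selected neighbors (small noise);
    # RSDA may apply different, targeted transformations.
    x_nb = (x_nb + 0.01 * torch.randn_like(x_nb)).clamp(0, 1)
    x_aug = torch.cat([x, x_adv, x_nb])
    y_aug = torch.cat([y, y, y_nb])
    return x_aug, y_aug
```

In a training loop, the returned (x_aug, y_aug) pair would simply replace the clean batch in the usual cross-entropy update, so the model sees clean samples, adversarial samples, and transformed latent neighbors at every step.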