Multiple Adversarial Domains Adaptation Approach for Mitigating Adversarial Attacks Effects

Cited by: 3
Authors
Rasheed, Bader [1 ]
Khan, Adil [1 ]
Ahmad, Muhammad [2 ]
Mazzara, Manuel [3 ]
Kazmi, S. M. Ahsan [4 ]
Affiliations
[1] Innopolis Univ, Inst Data Sci & Artificial Intelligence, Innopolis, Russia
[2] Natl Univ Comp & Emerging Sci, Dept Comp Sci, Islamabad, Pakistan
[3] Innopolis Univ, Inst Software Dev & Engn, Innopolis, Russia
[4] Univ West England, Fac Comp Sci & Creat Technol, Bristol, Avon, England
Keywords
AWARE;
DOI
10.1155/2022/2890761
CLC classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject classification codes
0808; 0809;
Abstract
Although neural networks are approaching human-level performance on many tasks, they remain susceptible to adversarial attacks: small, intentionally crafted perturbations that can cause misclassification. The best defense against these attacks so far is adversarial training (AT), which improves a model's robustness by augmenting the training data with adversarial examples. However, AT usually decreases the model's accuracy on clean samples and can overfit to a specific attack, limiting its ability to generalize to new attacks. In this paper, we investigate the use of domain adaptation to enhance AT's performance. We propose a novel multiple adversarial domain adaptation (MADA) method, which treats the problem as a domain adaptation task in order to discover robust features. Specifically, we use adversarial learning to learn features that are domain-invariant between multiple adversarial domains and the clean domain. We evaluated MADA on the MNIST and CIFAR-10 datasets with multiple adversarial attacks during training and testing. Our experiments show that MADA outperforms AT by about 4% on adversarial samples and by about 1% on clean samples, on average.
Pages: 11
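
Below is a minimal PyTorch sketch of the idea outlined in the abstract: adversarial training combined with domain-adversarial feature learning so that features become invariant across the clean domain and several adversarial domains. The network architecture, the use of FGSM to generate the adversarial domains, the gradient-reversal domain head, and all loss weights are illustrative assumptions, not the paper's exact MADA implementation.

# Minimal sketch (assumptions noted above): adversarial training plus a
# domain-adversarial loss over the clean domain and FGSM-generated domains.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer commonly used for domain-adversarial training."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

class Net(nn.Module):
    """Feature extractor + label classifier + domain classifier (illustrative)."""
    def __init__(self, num_classes=10, num_domains=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.classifier = nn.Linear(64 * 16, num_classes)
        # The domain head tries to tell the clean domain from each adversarial one.
        self.domain_head = nn.Linear(64 * 16, num_domains)

    def forward(self, x, lamb=1.0):
        f = self.features(x)
        return self.classifier(f), self.domain_head(grad_reverse(f, lamb))

def fgsm(model, x, y, eps):
    """One-step FGSM attack used here to create one adversarial domain."""
    x = x.clone().detach().requires_grad_(True)
    logits, _ = model(x)
    grad = torch.autograd.grad(F.cross_entropy(logits, y), x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def train_step(model, optimizer, x, y, eps_list=(0.1, 0.3), lamb=0.1):
    """Classification loss on clean and adversarial batches, plus a
    domain-adversarial loss that pushes features to be domain-invariant."""
    model.train()
    domains = [x] + [fgsm(model, x, y, eps) for eps in eps_list]
    cls_loss, dom_loss = 0.0, 0.0
    for d_idx, xd in enumerate(domains):
        logits, dom_logits = model(xd, lamb)
        cls_loss = cls_loss + F.cross_entropy(logits, y)
        dom_target = torch.full((x.size(0),), d_idx,
                                dtype=torch.long, device=x.device)
        dom_loss = dom_loss + F.cross_entropy(dom_logits, dom_target)
    loss = cls_loss + dom_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = Net()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(8, 1, 28, 28)   # stand-in for an MNIST batch
    y = torch.randint(0, 10, (8,))
    print(train_step(model, opt, x, y))

With two FGSM budgets plus the clean batch, the domain head sees three domains; the gradient-reversal layer makes the feature extractor work against it, which is one common way to encourage domain-invariant features.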