IMPROVING ROBUSTNESS OF DEEP NETWORKS USING CLUSTER-BASED ADVERSARIAL TRAINING

Times Cited: 0
Authors
Rasheed, Bader [1 ]
Khan, Adil [1 ,2 ]
Affiliations
[1] Innopolis Univ, Machine Learning & Knowledge Representat Lab, Innopolis, Russia
[2] Univ Hull, Sch Comp Sci, Kingston Upon Hull HU6 7RX, N Humberside, England
Keywords
Deep neural networks; Adversarial attacks; Robustness; Adversarial training
DOI
Not available
CLC Number
D9 [Law]; DF [Law];
Subject Classification Code
0301;
Abstract
Deep learning models have been found to be susceptible to adversarial attacks, which limits their use in security-sensitive applications. One way to enhance the resilience of these models is adversarial training, which involves training them on intentionally crafted adversarial examples. This study introduces a clustering-based adversarial training technique, together with preliminary results and motivation. In this approach, rather than using adversarial instances directly, they are first grouped using various clustering algorithms and criteria, creating a new structured space for model training. The method's performance is evaluated on the MNIST dataset against different adversarial attacks, such as FGSM and PGD, with an examination of the accuracy-robustness trade-off. The results show that cluster-based adversarial training can serve as a data augmentation method that improves generalization in both the clean and adversarial domains.
Pages: 412-420
Number of Pages: 9
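The abstract describes the approach only at a high level (adversarial examples are generated, clustered under various criteria, and the resulting structured set is used for training); the record does not specify the clustering algorithm, the representatives chosen per cluster, or the training objective. The snippet below is a minimal sketch of one plausible instantiation, assuming FGSM perturbations on MNIST-sized inputs, k-means clustering of the adversarial examples, and one representative per cluster mixed into the clean batch as augmentation. SmallNet, fgsm_examples, and cluster_adversarial_batch are hypothetical names for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans


class SmallNet(nn.Module):
    """Hypothetical small classifier for 28x28 MNIST-like inputs."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)


def fgsm_examples(model, x, y, eps=0.1):
    """One-step FGSM: perturb x along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()


def cluster_adversarial_batch(model, x, y, n_clusters=16, eps=0.1):
    """Cluster FGSM examples with k-means and keep, per cluster, the
    adversarial example closest to the centroid (with its label)."""
    x_adv = fgsm_examples(model, x, y, eps)
    flat = x_adv.view(x_adv.size(0), -1).cpu().numpy()
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)
    reps, rep_labels = [], []
    for c in range(n_clusters):
        idx = (km.labels_ == c).nonzero()[0]
        if len(idx) == 0:          # guard against an empty cluster
            continue
        dists = ((flat[idx] - km.cluster_centers_[c]) ** 2).sum(axis=1)
        best = idx[dists.argmin()]
        reps.append(x_adv[best])
        rep_labels.append(y[best])
    return torch.stack(reps), torch.stack(rep_labels)


if __name__ == "__main__":
    model = SmallNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 1, 28, 28)            # stand-in for an MNIST batch
    y = torch.randint(0, 10, (64,))
    x_rep, y_rep = cluster_adversarial_batch(model, x, y)
    # Mix cluster representatives into the clean batch as augmentation.
    x_mix, y_mix = torch.cat([x, x_rep]), torch.cat([y, y_rep])
    opt.zero_grad()
    F.cross_entropy(model(x_mix), y_mix).backward()
    opt.step()
```

In this sketch the cluster representatives are simply concatenated with the clean batch for a single optimizer step; the paper's actual method may instead train on centroids, weight clusters differently, or use other clustering criteria, none of which can be inferred from this record alone.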