GReAT: A Graph Regularized Adversarial Training Method

Cited by: 0
Authors
Bayram, Samet [1]
Barner, Kenneth [1]
Affiliations
[1] Univ Delaware, Elect & Comp Engn Dept, Newark, DE 19716 USA
Keywords
Adversarial examples; adversarial learning; adversarial training; graph regularization; image classification; semi-supervised learning; robustness
DOI
10.1109/ACCESS.2024.3395976
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
This paper presents GReAT (Graph Regularized Adversarial Training), a novel regularization method designed to enhance the robust classification performance of deep learning models. Adversarial examples, inputs altered by subtle perturbations that can mislead models, pose a significant challenge in machine learning. Although adversarial training is effective in defending against such attacks, it often overlooks the underlying structure of the data. In response, GReAT integrates graph-based regularization into the adversarial training process, leveraging the data's inherent structure to enhance model robustness. By incorporating graph information during training, GReAT both defends against adversarial attacks and improves generalization to unseen data. Extensive evaluations on benchmark datasets demonstrate that GReAT outperforms state-of-the-art methods in robustness, achieving notable improvements in classification accuracy. Specifically, compared to the second-best methods, GReAT improves accuracy by approximately 4.87% on CIFAR-10 and 10.57% on SVHN under FGSM attacks, and by approximately 11.05% on CIFAR-10 and 5.54% on SVHN under PGD attacks. The paper provides detailed insights into the proposed methodology, including numerical results and comparisons with existing approaches, highlighting the significant impact of GReAT in advancing the performance of deep learning models.
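The abstract describes the method only at a high level. As a concrete illustration, below is a minimal PyTorch sketch of what a graph-regularized adversarial training objective could look like: a cross-entropy loss on FGSM-perturbed inputs plus a Laplacian smoothness penalty over a batch-level similarity graph. The function names (fgsm_perturb, knn_affinity, great_style_loss), the k-NN graph construction, and the weight lambda_g are illustrative assumptions, not the authors' published implementation.

```python
# Illustrative sketch only, based on the abstract above; NOT the paper's code.
import torch
import torch.nn.functional as F


def fgsm_perturb(model, x, y, eps=8 / 255):
    """One-step FGSM: move each input along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()


def knn_affinity(feats, k=5):
    """Symmetric cosine-similarity k-NN graph over a batch of feature vectors
    (an assumed construction; the paper's graph may be built differently)."""
    f = F.normalize(feats, dim=1)
    sim = f @ f.t()
    sim.fill_diagonal_(0.0)
    idx = sim.topk(k, dim=1).indices
    w = torch.zeros_like(sim)
    w.scatter_(1, idx, sim.gather(1, idx))
    return 0.5 * (w + w.t())


def great_style_loss(model, x, y, lambda_g=0.1):
    """Cross-entropy on FGSM examples plus a Laplacian smoothness penalty
    that pulls the logits of neighboring clean/adversarial points together."""
    x_adv = fgsm_perturb(model, x, y)
    z = torch.cat([model(x), model(x_adv)], dim=0)   # 2N x C logits
    ce = F.cross_entropy(z[len(x):], y)              # loss on the adversarial half
    w = knn_affinity(z.detach())                     # graph weights, no gradient
    # Sum_ij W_ij * ||z_i - z_j||^2, the usual graph Laplacian regularizer.
    reg = (w * torch.cdist(z, z).pow(2)).sum() / w.sum().clamp_min(1e-8)
    return ce + lambda_g * reg
```

In a training loop, great_style_loss would simply replace the standard cross-entropy loss before calling loss.backward(); the lambda_g weight trades off adversarial accuracy against smoothness over the graph.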
Pages: 63130-63141
Number of pages: 12
Related Papers
49 references in total
[31] Mhaskar H., 2017, AAAI Conference on Artificial Intelligence, p. 2343.
[32] Moosavi-Dezfooli S.-M., Fawzi A., Frossard P., "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2574-2582.
[33] Netzer Y., 2011, NIPS Workshop on Deep Learning and Unsupervised Feature Learning, Vol. 2011, p. 4.
[34] Papernot N., McDaniel P., Wu X., Jha S., Swami A., "Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks," 2016 IEEE Symposium on Security and Privacy (SP), 2016, pp. 582-597.
[35] Papernot N., McDaniel P., Jha S., Fredrikson M., Celik Z. B., Swami A., "The Limitations of Deep Learning in Adversarial Settings," 1st IEEE European Symposium on Security and Privacy, 2016, pp. 372-387.
[36] Ren K., Zheng T., Qin Z., Liu X., "Adversarial Attacks and Defenses in Deep Learning," Engineering, 2020, 6(3), pp. 346-360.
[37] Sharif M., Bhagavatula S., Reiter M. K., Bauer L., "Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition," CCS'16: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 1528-1540.
[38] Szegedy C., 2014, arXiv, DOI arXiv:1312.6199.
[39] Tramèr F., 2017, arXiv, DOI arXiv:1704.03453.
[40] van der Maaten L., 2008, Journal of Machine Learning Research, Vol. 9, p. 2579.