Defending Deep Learning Models Against Adversarial Attacks

Cited by: 26
Authors
Mani, Nag [1]
Moh, Melody [1]
Moh, Teng-Sheng [2]
Affiliations
[1] San Jose State Univ, San Jose, CA 95192 USA
[2] San Jose State Univ, Dept Comp Sci, San Jose, CA 95192 USA
Keywords
Adversarial Examples; Adversarial Retraining; Basic Iterative Method; DeepFool; Ensemble Defense; Fast Gradient Sign Method; Gradient-Based Attacks; Image Recognition; Securing Deep Learning
DOI
10.4018/IJSSCI.2021010105
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep learning (DL) has been used globally in almost every sector of technology and society. Despite its huge success, DL models and applications are susceptible to adversarial attacks, which impact the accuracy and integrity of these models. Many state-of-the-art models are vulnerable to well-crafted adversarial examples: perturbed versions of clean data with a small amount of added noise that is imperceptible to the human eye yet can easily fool the targeted model. This paper introduces six of the most effective gradient-based adversarial attacks against the ResNet image recognition model and demonstrates the limitations of the traditional adversarial retraining technique. The authors then present a novel ensemble defense strategy based on adversarial retraining. The proposed method withstands all six adversarial attacks on the CIFAR-10 dataset with accuracy greater than 89.31% and as high as 96.24%. The authors believe the design methodologies and experiments demonstrated are widely applicable to other domains of machine learning, DL, and computational intelligence security.
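To make the gradient-based attack family concrete, the following is a minimal Python (PyTorch) sketch of the Fast Gradient Sign Method (FGSM), one of the attacks named in the keywords. It is an illustration under assumed names (model, images, labels, epsilon), not the authors' implementation; the paper itself evaluates attacks against ResNet on CIFAR-10.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=0.03):
        # Hypothetical FGSM sketch: model, images, labels, and epsilon are
        # placeholder names, not the paper's code. Perturb each input one
        # step along the sign of the loss gradient w.r.t. the input.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        adv = images + epsilon * images.grad.sign()
        # Clamp back to the valid pixel range so the result is a legal image.
        return adv.clamp(0.0, 1.0).detach()

Adversarial retraining, as the abstract describes, augments the training set with such perturbed examples; the proposed ensemble defense builds on that technique to withstand all six attacks simultaneously.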
Pages: 72-89
Number of pages: 18
Related Papers (50 in total)
  • [1] Defending AI Models Against Adversarial Attacks in Smart Grids Using Deep Learning
    Sampedro, Gabriel Avelino
    Ojo, Stephen
    Krichen, Moez
    Alamro, Meznah A.
    Mihoub, Alaeddine
    Karovic, Vincent
    IEEE ACCESS, 2024, 12 : 157408 - 157417
  • [2] Temporal shuffling for defending deep action recognition models against adversarial attacks
    Hwang, Jaehui
    Zhang, Huan
    Choi, Jun-Ho
    Hsieh, Cho-Jui
    Lee, Jong-Seok
    NEURAL NETWORKS, 2024, 169 : 388 - 397
  • [3] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [4] Cascaded Defending and Detecting of Adversarial Attacks Against Deep Learning System in Ophthalmic Imaging
    Ng, Wei Yan
    Xu, Yanyu
    Xu, Xinxing
    Ting, Daniel S. W.
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2023, 64 (08)
  • [5] Defending Against Deep Learning-Based Traffic Fingerprinting Attacks With Adversarial Examples
    Hayden, Blake
    Walsh, Timothy
    Barton, Armon
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2025, 28 (01)
  • [6] Defending against Deep-Learning-Based Flow Correlation Attacks with Adversarial Examples
    Zhang, Ziwei
    Ye, Dengpan
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [7] DiffDefense: Defending Against Adversarial Attacks via Diffusion Models
    Silva, Hondamunige Prasanna
    Seidenari, Lorenzo
    Del Bimbo, Alberto
    IMAGE ANALYSIS AND PROCESSING, ICIAP 2023, PT II, 2023, 14234 : 430 - 442
  • [8] Defending against Adversarial Attacks in Federated Learning on Metric Learning Model
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    He, Liangzhong
    2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024, : 197 - 206
  • [9] Defending non-Bayesian learning against adversarial attacks
    Su, Lili
    Vaidya, Nitin H.
    DISTRIBUTED COMPUTING, 2019, 32 : 277 - 289