Robustness and Security in Deep Learning: Adversarial Attacks and Countermeasures

Cited by: 0
Authors
Kaur, Navjot [1 ]
Singh, Someet [1 ]
Deore, Shailesh Shivaji [2 ]
Vidhate, Deepak A. [3 ]
Haridas, Divya [4 ]
Kosuri, Gopala Varma [5 ]
Kolhe, Mohini Ravindra [6 ]
Affiliations
[1] Lovely Profess Univ, Bengaluru, India
[2] SSVPS Bapusaheb Shivajirao Deore Coll Engn, Dept Comp Engn, Dhule, Maharashtra, India
[3] Dr Vithalrao Vikhe Patil Coll Engn Vilad Ghat, Dept Informat Technol, Ahmednagar, Maharashtra, India
[4] Saveetha Inst Med & Tech Sci SIMTS, Saveetha Sch Engn, Dept Condensed Matter Phys, Chennai 602105, Tamil Nadu, India
[5] SRKR Engn Coll, CSE, Bhimavaram, India
[6] Dr DY Patil Inst Technol, Pune, India
Keywords
Deep Learning; Adversarial Attacks; Robustness; Defense Mechanisms; Adversarial Training; Input Preprocessing;
DOI
Not available
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Deep learning models have demonstrated remarkable performance across various domains, yet their susceptibility to adversarial attacks remains a significant concern. In this study, we investigate the effectiveness of three defense mechanisms, Baseline (No Defense), Adversarial Training, and Input Preprocessing, in enhancing the robustness of deep learning models against adversarial attacks. The baseline model serves as a reference point, highlighting the vulnerability of undefended deep learning systems to adversarial perturbations. Adversarial Training, which augments the training data with adversarial examples, significantly improves model resilience, achieving higher accuracy under both Fast Gradient Sign Method (FGSM) and Iterative Gradient Sign Method (IGSM) attacks. Similarly, Input Preprocessing techniques mitigate the impact of adversarial perturbations on model predictions by modifying input data before inference. Each defense mechanism, however, presents trade-offs: Adversarial Training requires additional computational resources and longer training times, while Input Preprocessing may introduce distortions that affect model generalization. Future research may focus on developing more sophisticated defense mechanisms, including ensemble methods, gradient masking, and certified defense strategies, to provide robust and reliable deep learning systems in real-world scenarios. This study contributes to a deeper understanding of defense mechanisms against adversarial attacks in deep learning and highlights the importance of implementing robust strategies to enhance model resilience.
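The two attacks named in the abstract follow their standard formulations: FGSM takes a single step of size epsilon along the sign of the input gradient of the loss, x_adv = x + epsilon * sign(grad_x J(theta, x, y)), while IGSM repeats smaller signed steps and clips the result back into an epsilon-ball around the original input. The paper itself provides no code, so the following is a minimal PyTorch sketch assuming image inputs scaled to [0, 1]; the function names and default step parameters are illustrative, not taken from the paper.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # Single-step FGSM: x_adv = x + epsilon * sign(grad_x loss),
    # clipped back to the valid pixel range [0, 1].
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

def igsm_attack(model, x, y, epsilon, alpha=0.01, steps=10):
    # Iterative FGSM (IGSM/BIM): repeat small signed-gradient steps,
    # projecting each result back into the epsilon-ball around x.
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x_orig + (x_adv - x_orig).clamp(-epsilon, epsilon)).clamp(0.0, 1.0)
    return x_adv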
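The two defenses the study evaluates can be sketched the same way. Adversarial Training augments each batch with attack examples and optimizes a mixed clean/adversarial loss; Input Preprocessing transforms inputs before inference, illustrated here with bit-depth reduction (feature squeezing). This is again a hedged sketch: the 50/50 loss weighting, the epsilon default, and the bits parameter are assumptions for illustration, not values reported in the paper.

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # Train on a mix of clean and FGSM-perturbed examples.
    # The equal weighting is an assumption; the paper does not specify one.
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

def preprocess_inputs(x, bits=4):
    # Input-preprocessing defense: quantize pixels to 2**bits levels so that
    # small adversarial perturbations are rounded away before inference.
    scale = 2 ** bits - 1
    return torch.round(x * scale) / scale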
Pages: 1250-1257
Page count: 8