Deblurring as a Defense against Adversarial Attacks

Cited: 0
Authors
Duckworth, William, III [1 ]
Liao, Weixian [1 ]
Yu, Wei [1 ]
Affiliations
[1] Towson Univ, Dept Comp & Informat Sci, Towson, MD 21252 USA
Source
2023 IEEE 12TH INTERNATIONAL CONFERENCE ON CLOUD NETWORKING, CLOUDNET | 2023
Keywords
Machine Learning; adversarial defense; deblurring; cloud security
DOI
10.1109/CloudNet59005.2023.10490049
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
The increased use of image classification models and the prevalence of autonomous vehicles on roads have sparked a conversation about protecting these tools from adversarial attacks. Much of the existing research investigates methods of detecting and removing adversarial gradients during the model training phase, which can negatively affect the accuracy of learning models. However, how best to protect a learning model once it has been trained and deployed is an emerging yet significant problem that has received little attention. In this paper, we investigate the use of Artificial Intelligence (AI) deblurring as a defense against adversarial gradients that is both model- and attack-independent. Specifically, we propose a multi-part, input-transformation-based adversarial defense that combines blurring, contrast adjustment, and AI deblurring to remove adversarial gradients from images without significantly increasing classification time. By using AI deblurring in conjunction with blurring and contrast adjustment, we mitigate the feature data lost when using higher standard deviations for the blur kernel. Our approach denoises better than blurring alone, leading to improved classification accuracy.
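The input-transformation pipeline described in the abstract (blur, then contrast adjustment, then learned deblurring) can be sketched roughly as follows. All function names and parameter values here are illustrative, not the paper's implementation; the AI deblurring stage is left as a pluggable callable, since it requires a pretrained deblurring network that is not given here.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=2.0):
    """Normalized 2-D Gaussian kernel; sigma is the blur standard deviation."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma=2.0, size=5):
    """Gaussian-blur a 2-D grayscale image (edge padding keeps the shape)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

def adjust_contrast(img, factor=1.2):
    """Stretch pixel values around the mean, then clip back to [0, 1]."""
    mean = img.mean()
    return np.clip(mean + factor * (img - mean), 0.0, 1.0)

def purify(img, sigma=2.0, deblur_model=None):
    """Blur -> contrast adjust -> (optional) learned deblurring.

    `deblur_model`, if given, is a callable such as a pretrained
    deblurring network that maps an image back toward its sharp version.
    """
    x = blur(img, sigma=sigma)
    x = adjust_contrast(x)
    if deblur_model is not None:
        x = deblur_model(x)
    return x
```

The purified image would then be passed to the (unmodified) classifier; because the transformation happens at inference time, the defense requires no retraining, which is what makes it model- and attack-independent.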
Pages: 61-67 (7 pages)
Related Papers
50 items in total
  • [1] Text Adversarial Purification as Defense against Adversarial Attacks
    Li, Linyang
    Song, Demin
    Qiu, Xipeng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 338 - 350
  • [2] Defense against Adversarial Attacks with an Induced Class
    Xu, Zhi
    Wang, Jun
    Pu, Jian
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021
  • [3] On the Defense of Spoofing Countermeasures Against Adversarial Attacks
    Nguyen-Vu, Long
    Doan, Thien-Phuc
    Bui, Mai
    Hong, Kihun
    Jung, Souhwan
    IEEE ACCESS, 2023, 11 : 94563 - 94574
  • [4] A Defense Method Against Facial Adversarial Attacks
    Sadu, Chiranjeevi
    Das, Pradip K.
    2021 IEEE REGION 10 CONFERENCE (TENCON 2021), 2021, : 459 - 463
  • [5] Binary thresholding defense against adversarial attacks
    Wang, Yutong
    Zhang, Wenwen
    Shen, Tianyu
    Yu, Hui
    Wang, Fei-Yue
    NEUROCOMPUTING, 2021, 445 : 61 - 71
  • [6] Defense against adversarial attacks using DRAGAN
    ArjomandBigdeli, Ali
    Amirmazlaghani, Maryam
    Khalooei, Mohammad
    2020 6TH IRANIAN CONFERENCE ON SIGNAL PROCESSING AND INTELLIGENT SYSTEMS (ICSPIS), 2020
  • [7] Defense Against Adversarial Attacks in Deep Learning
    Li, Yuancheng
    Wang, Yimeng
    APPLIED SCIENCES-BASEL, 2019, 9 (01)
  • [8] Optimal Transport as a Defense Against Adversarial Attacks
    Bouniot, Quentin
    Audigier, Romaric
    Loesch, Angelique
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 5044 - 5051
  • [9] Defense Against Adversarial Attacks by Reconstructing Images
    Zhang, Shudong
    Gao, Haichang
    Rao, Qingxun
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 6117 - 6129
  • [10] The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks
    Frosio, Iuri
    Kautz, Jan
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 4067 - 4076