TENSORSHIELD: Tensor-based Defense Against Adversarial Attacks on Images

Cited by: 0
Authors
Entezari, Negin [1 ]
Papalexakis, Evangelos E. [1 ]
Affiliations
[1] Univ Calif Riverside, Riverside, CA 92521 USA
Keywords
adversarial machine learning; deep neural networks; image classification
DOI
10.1109/MILCOM55135.2022.10017763
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Recent studies have demonstrated that machine learning approaches such as deep neural networks (DNNs) are easily fooled by adversarial attacks: subtle, imperceptible perturbations of the input can change a network's prediction. Deploying such vulnerable models raises serious concerns, especially in security-critical domains, so designing defenses against adversarial attacks is crucial. For image classification, imperceptible perturbations mostly reside in the high-frequency spectrum of the image. In this paper, we use tensor decomposition as a preprocessing step to compute a low-rank approximation of each image that discards much of this high-frequency perturbation. Recently, the SHIELD framework [1] showed that Convolutional Neural Networks (CNNs) can be "vaccinated" against adversarial examples by applying random-quality JPEG compression to local patches of ImageNet images. Our tensor-based defense outperforms the Stochastic Local Quantization (SLQ) method from SHIELD by 14% against the Fast Gradient Sign Method (FGSM) attack, while maintaining comparable speed.
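The core idea of the defense is to reconstruct each input from a low-rank tensor decomposition before it reaches the classifier. The snippet below is a minimal sketch of that idea, not the authors' implementation: it assumes the TensorLy library, and the function name low_rank_denoise and the Tucker ranks are illustrative choices for demonstration only.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import tucker

    def low_rank_denoise(image, ranks=(64, 64, 3)):
        """Return a low-rank Tucker approximation of an H x W x 3 image in [0, 1].

        The ranks are illustrative: smaller mode ranks discard more of the
        high-frequency content where imperceptible perturbations tend to live.
        """
        tensor = tl.tensor(image.astype(np.float64))
        core, factors = tucker(tensor, rank=list(ranks))   # Tucker decomposition
        approx = tl.tucker_to_tensor((core, factors))      # low-rank reconstruction
        return np.clip(tl.to_numpy(approx), 0.0, 1.0)

    # Usage: preprocess every (possibly adversarial) input before the CNN sees it.
    img = np.random.rand(224, 224, 3)   # stand-in for a normalized ImageNet image
    cleaned = low_rank_denoise(img)
    print(cleaned.shape)                # (224, 224, 3)

In this sketch, the low-rank reconstruction plays the role that random-quality JPEG compression plays in SHIELD: both remove high-frequency detail that adversarial perturbations rely on while preserving the coarse structure the classifier needs.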
Pages: 6