TENSORSHIELD: Tensor-based Defense Against Adversarial Attacks on Images

Cited: 0
Authors
Entezari, Negin [1 ]
Papalexakis, Evangelos E. [1 ]
Affiliations
[1] Univ Calif Riverside, Riverside, CA 92521 USA
Source
2022 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM) | 2022
Keywords
adversarial machine learning; deep neural networks; image classification
DOI
10.1109/MILCOM55135.2022.10017763
CLC Classification
TP [Automation & Computer Technology]
Subject Classification
0812
Abstract
Recent studies have demonstrated that machine learning approaches such as deep neural networks (DNNs) are easily fooled by adversarial attacks: subtle, imperceptible perturbations of the input can change a network's prediction. Deploying such vulnerable models raises serious concerns, especially in domains where security is critical, so it is important to design defense mechanisms against adversarial attacks. In image classification, imperceptible perturbations mostly reside in the high-frequency spectrum of the image. In this paper, we use tensor decomposition techniques as a preprocessing step to compute a low-rank approximation of each image that discards much of the high-frequency perturbation. Recently, the SHIELD framework [1] "vaccinated" Convolutional Neural Networks (CNNs) against adversarial examples by applying random-quality JPEG compression to local patches of images from the ImageNet dataset. Our tensor-based defense mechanism outperforms the SLQ method from SHIELD by 14% against Fast Gradient Sign Method (FGSM) adversarial attacks, while maintaining comparable speed.
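The preprocessing idea described in the abstract (computing a low-rank approximation of an image tensor to suppress high-frequency perturbations) can be sketched with a truncated higher-order SVD. This is an illustrative NumPy sketch under stated assumptions, not the paper's exact Tucker/CP implementation; all function names here are hypothetical.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n matricization: move `mode` to the front and flatten the rest."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def fold(m, mode, shape):
    """Inverse of `unfold`: rebuild a tensor of `shape` from its mode-n unfolding."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(m.reshape([shape[mode]] + rest), 0, mode)

def lowrank_image(img, ranks):
    """Truncated HOSVD: keep the leading `ranks` singular vectors per mode,
    discarding the high-frequency components where adversarial noise tends
    to concentrate. `img` is an (H, W, C) array; `ranks` gives one rank per mode."""
    img = img.astype(float)
    # Per-mode factor matrices from the SVD of each unfolding.
    factors = [np.linalg.svd(unfold(img, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    # Core tensor: project each mode onto its leading subspace.
    core = img
    for m, U in enumerate(factors):
        new_shape = core.shape[:m] + (U.shape[1],) + core.shape[m + 1:]
        core = fold(U.T @ unfold(core, m), m, new_shape)
    # Reconstruct the low-rank approximation at the original resolution.
    approx = core
    for m, U in enumerate(factors):
        new_shape = approx.shape[:m] + (U.shape[0],) + approx.shape[m + 1:]
        approx = fold(U @ unfold(approx, m), m, new_shape)
    return approx
```

In a defense pipeline of this kind, each input image would be replaced by `lowrank_image(img, ranks)` before classification; the choice of ranks trades off how aggressively perturbations are suppressed against how much clean-image detail (and hence clean accuracy) is preserved.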
Pages: 6
References
35 items in total
  • [21] Luo Y., 2018, arXiv:1812.02891
  • [22] Madry A., 2019, arXiv:1706.06083
  • [23] Tensor-Train Decomposition
    Oseledets, I. V.
    [J]. SIAM JOURNAL ON SCIENTIFIC COMPUTING, 2011, 33 (05) : 2295 - 2317
  • [24] Kingma D. P., 2014, arXiv:1312.6114
  • [25] Tensors for Data Mining and Data Fusion: Models, Applications, and Scalable Algorithms
    Papalexakis, Evangelos E.
    Faloutsos, Christos
    Sidiropoulos, Nicholas D.
    [J]. ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2017, 8 (02)
  • [26] Papernot N., 2018, arXiv:1610.00768
  • [27] Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
    Papernot, Nicolas
    McDaniel, Patrick
    Wu, Xi
    Jha, Somesh
    Swami, Ananthram
    [J]. 2016 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2016, : 582 - 597
  • [28] The Limitations of Deep Learning in Adversarial Settings
    Papernot, Nicolas
    McDaniel, Patrick
    Jha, Somesh
    Fredrikson, Matt
    Celik, Z. Berkay
    Swami, Ananthram
    [J]. 1ST IEEE EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY, 2016, : 372 - 387
  • [29] Simonyan K., 2015, arXiv:1409.1556
  • [30] Szegedy C., 2014, arXiv:1312.6199