TENSORSHIELD: Tensor-based Defense Against Adversarial Attacks on Images

Cited by: 0
Authors
Entezari, Negin [1]
Papalexakis, Evangelos E. [1]
Affiliation
[1] Univ Calif Riverside, Riverside, CA 92521 USA
Keywords
adversarial machine learning; deep neural networks; image classification
DOI
10.1109/MILCOM55135.2022.10017763
CLC number
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Recent studies have demonstrated that machine learning approaches such as deep neural networks (DNNs) are easily fooled by adversarial attacks: subtle, imperceptible perturbations of the input data can change the output of a deep neural network. Deploying such vulnerable machine learning methods raises serious concerns, especially in domains where security is critical. It is therefore crucial to design defense mechanisms against adversarial attacks. In image classification, unnoticeable perturbations mostly reside in the high-frequency spectrum of the image. In this paper, we use tensor decomposition techniques as a preprocessing step to compute a low-rank approximation of images, which significantly discards high-frequency perturbations. Recently, a defense framework called SHIELD [1] was shown to "vaccinate" Convolutional Neural Networks (CNNs) against adversarial examples by applying random-quality JPEG compression to local patches of images on the ImageNet dataset. Our tensor-based defense mechanism outperforms the SLQ method from SHIELD by 14% against Fast Gradient Sign Method (FGSM) adversarial attacks, while maintaining comparable speed.
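The core idea described above is to project each image onto a low-rank approximation before classification, so that high-frequency adversarial perturbations are discarded. The paper uses tensor decompositions; as a minimal illustrative sketch of the same low-rank principle (not the authors' implementation), the snippet below applies a truncated SVD per color channel. The function name `low_rank_denoise` and the rank value are hypothetical choices for the example.

```python
import numpy as np

def low_rank_denoise(image, rank=20):
    """Low-rank preprocessing sketch: truncated SVD per channel.

    TensorShield uses tensor decompositions; this per-channel matrix
    SVD is a simplified stand-in for the same idea: keeping only the
    top singular components discards high-frequency perturbations
    while preserving the dominant image structure.
    """
    image = np.asarray(image, dtype=float)
    out = np.empty_like(image)
    for c in range(image.shape[-1]):
        # Factor the channel and keep only the top `rank` components.
        U, s, Vt = np.linalg.svd(image[..., c], full_matrices=False)
        out[..., c] = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return out

# Example: reconstruct a random "image" at rank 10 before feeding it
# to a classifier.
img = np.random.rand(64, 64, 3)
approx = low_rank_denoise(img, rank=10)
```

A defense of this kind is attractive because it requires no retraining: the low-rank projection is applied only at inference time, in front of an unmodified classifier.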
Pages: 6
Related papers
50 records total
  • [31] CP tensor-based compression of hyperspectral images
    Fang, Leyuan
    He, Nanjun
    Lin, Hui
    JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION, 2017, 34 (02) : 252 - 258
  • [32] Symmetry Defense Against CNN Adversarial Perturbation Attacks
    Lindqvist, Blerta
    INFORMATION SECURITY, ISC 2023, 2023, 14411 : 142 - 160
  • [33] Universal Inverse Perturbation Defense Against Adversarial Attacks
    Chen J.-Y.
    Wu C.-A.
    Zheng H.-B.
    Wang W.
    Wen H.
    Zidonghua Xuebao/Acta Automatica Sinica, 2023, 49 (10): : 2172 - 2187
  • [34] DEFENSE AGAINST ADVERSARIAL ATTACKS ON SPOOFING COUNTERMEASURES OF ASV
    Wu, Haibin
    Liu, Songxiang
    Meng, Helen
    Lee, Hung-yi
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 6564 - 6568
  • [35] Watermarking-based Defense against Adversarial Attacks on Deep Neural Networks
    Li, Xiaoting
    Chen, Lingwei
    Zhang, Jinquan
    Larus, James
    Wu, Dinghao
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [36] Instance-based defense against adversarial attacks in Deep Reinforcement Learning
    Garcia, Javier
    Sagredo, Ismael
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 107
  • [37] Transformer Based Defense GAN Against Palm-Vein Adversarial Attacks
    Li, Yantao
    Ruan, Song
    Qin, Huafeng
    Deng, Shaojiang
    El-Yacoubi, Mounim A.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 1509 - 1523
  • [38] Reconstructing images with attention generative adversarial network against adversarial attacks
    Shen, Xiong
    Lu, Yiqin
    Cheng, Zhe
    Mao, Zhongshu
    Yang, Zhang
    Qin, Jiancheng
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (03) : 33029
  • [39] TENSOR-BASED DEEPFAKE DETECTION IN SCALED AND COMPRESSED IMAGES
    Concas, Sara
    Perelli, Gianpaolo
    Marcialis, Gian Luca
    Puglisi, Giovanni
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 3121 - 3125
  • [40] ENSEMBLE ADVERSARIAL TRAINING BASED DEFENSE AGAINST ADVERSARIAL ATTACKS FOR MACHINE LEARNING-BASED INTRUSION DETECTION SYSTEM
    Haroon, M. S.
    Ali, H. M.
    NEURAL NETWORK WORLD, 2023, 33 (05) : 317 - 336