TENSORSHIELD: Tensor-based Defense Against Adversarial Attacks on Images

Cited by: 0
Authors
Entezari, Negin [1 ]
Papalexakis, Evangelos E. [1 ]
Affiliations
[1] Univ Calif Riverside, Riverside, CA 92521 USA
Source
2022 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM) | 2022
Keywords
adversarial machine learning; deep neural networks; image classification
DOI
10.1109/MILCOM55135.2022.10017763
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Recent studies have demonstrated that machine learning approaches such as deep neural networks (DNNs) are easily fooled by adversarial attacks: subtle, imperceptible perturbations of the input can change a network's prediction. Deploying such vulnerable models raises serious concerns, especially in security-critical domains, so designing defense mechanisms against adversarial attacks is crucial. For image classification, imperceptible perturbations mostly reside in the high-frequency spectrum of the image. In this paper, we use tensor decomposition as a preprocessing step to compute a low-rank approximation of each image, which discards much of this high-frequency perturbation. Recently, the defense framework SHIELD [1] was shown to "vaccinate" Convolutional Neural Networks (CNNs) against adversarial examples by applying random-quality JPEG compression to local patches of images on the ImageNet dataset. Our tensor-based defense mechanism outperforms the SLQ method from SHIELD by 14% against Fast Gradient Sign Method (FGSM) adversarial attacks, while maintaining comparable speed.
Pages: 6
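
As a rough illustration of the preprocessing idea described in the abstract, the sketch below computes a low-rank Tucker approximation of an image tensor before it is passed to a classifier. This is a minimal sketch, assuming the tensorly library and illustrative rank settings; it is not the authors' exact TENSORSHIELD implementation, and the ranks shown are hypothetical rather than the paper's tuned values.

    # Minimal sketch: low-rank Tucker approximation of an RGB image as an
    # adversarial-defense preprocessing step (ranks are illustrative only).
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import tucker


    def low_rank_denoise(image, ranks=(64, 64, 3)):
        """Return a low-rank Tucker approximation of an H x W x C image.

        image: float array in [0, 1], shape (H, W, C).
        ranks: Tucker ranks along the (height, width, channel) modes;
               smaller ranks discard more high-frequency content.
        """
        tensor = tl.tensor(image)
        core, factors = tucker(tensor, rank=list(ranks))
        approx = tl.tucker_to_tensor((core, factors))
        # Clip back to a valid pixel range before feeding the classifier.
        return np.clip(tl.to_numpy(approx), 0.0, 1.0)


    if __name__ == "__main__":
        # Toy example: a random "image" stands in for a (possibly perturbed) input.
        rng = np.random.default_rng(0)
        x = rng.random((224, 224, 3))
        x_defended = low_rank_denoise(x)
        print(x_defended.shape)  # (224, 224, 3)

In such a setup, the low-rank reconstruction would be applied to every input image before classification, trading some fine detail for the removal of high-frequency perturbations.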
Related Papers
50 items in total
  • [1] AdvRefactor: A Resampling-Based Defense Against Adversarial Attacks
    Jiang, Jianguo
    Li, Boquan
    Yu, Min
    Liu, Chao
    Sun, Jianguo
    Huang, Weiqing
    Lv, Zhiqiang
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2018, PT II, 2018, 11165 : 815 - 825
  • [2] Watermarking-based Defense against Adversarial Attacks on Deep Neural Networks
    Li, Xiaoting
    Chen, Lingwei
    Zhang, Jinquan
    Larus, James
    Wu, Dinghao
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [3] Reconstructing images with attention generative adversarial network against adversarial attacks
    Shen, Xiong
    Lu, Yiqin
    Cheng, Zhe
    Mao, Zhongshu
    Yang, Zhang
    Qin, Jiancheng
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (03) : 33029
  • [4] A NEURO-INSPIRED AUTOENCODING DEFENSE AGAINST ADVERSARIAL ATTACKS
    Bakiskan, Can
    Cekic, Metehan
    Sezer, Ahmet Dundar
    Madhow, Upamanyu
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3922 - 3926
  • [5] Boundary Defense Against Black-box Adversarial Attacks
    Aithal, Manjushree B.
    Li, Xiaohua
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2349 - 2356
  • [6] StratDef: Strategic defense against adversarial attacks in ML-based malware detection
    Rashid, Aqib
    Such, Jose
    COMPUTERS & SECURITY, 2023, 134
  • [7] Adversarial Defense on Harmony: Reverse Attack for Robust AI Models Against Adversarial Attacks
    Kim, Yebon
    Jung, Jinhyo
    Kim, Hyunjun
    So, Hwisoo
    Ko, Yohan
    Shrivastava, Aviral
    Lee, Kyoungwoo
    Hwang, Uiwon
    IEEE ACCESS, 2024, 12 : 176485 - 176497
  • [8] A Network Security Classifier Defense: Against Adversarial Machine Learning Attacks
    De Lucia, Michael J.
    Cotton, Chase
    PROCEEDINGS OF THE 2ND ACM WORKSHOP ON WIRELESS SECURITY AND MACHINE LEARNING, WISEML 2020, 2020, : 67 - 73
  • [9] MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-Based Malware Detection
    Rashid, Aqib
    Such, Jose
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 4361 - 4376
  • [10] Defense against adversarial attacks by low-level image transformations
    Yin, Zhaoxia
    Wang, Hua
    Wang, Jie
    Tang, Jin
    Wang, Wenzhong
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2020, 35 (10) : 1453 - 1466