TENSORSHIELD: Tensor-based Defense Against Adversarial Attacks on Images

Cited: 0
Authors
Entezari, Negin [1]
Papalexakis, Evangelos E. [1]
Affiliations
[1] Univ Calif Riverside, Riverside, CA 92521 USA
Source
2022 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM) | 2022
Keywords
adversarial machine learning; deep neural networks; image classification
DOI
10.1109/MILCOM55135.2022.10017763
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Recent studies have demonstrated that machine learning approaches such as deep neural networks (DNNs) are easily fooled by adversarial attacks: subtle, imperceptible perturbations of the input data can change a network's output. Deploying such vulnerable models raises serious concerns, especially in domains where security is an important factor. It is therefore crucial to design defense mechanisms against adversarial attacks. In image classification, these unnoticeable perturbations mostly reside in the high-frequency spectrum of the image. In this paper, we use tensor decomposition techniques as a preprocessing step to compute a low-rank approximation of each image, which discards much of the high-frequency perturbation. Recently, the defense framework SHIELD [1] "vaccinated" Convolutional Neural Networks (CNNs) against adversarial examples by performing random-quality JPEG compression on local patches of ImageNet images. Our tensor-based defense mechanism outperforms the SLQ method from SHIELD by 14% against Fast Gradient Sign Method (FGSM) adversarial attacks, while maintaining comparable speed.
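The core idea of the abstract, removing high-frequency adversarial noise via a low-rank approximation, can be sketched with a simple truncated SVD applied per color channel. This is only an illustrative stand-in, not the paper's actual method: TENSORSHIELD uses tensor decompositions (e.g. treating the image as a third-order tensor), whereas the sketch below factorizes each channel as a matrix.

```python
import numpy as np

def low_rank_approx(image, rank=8):
    """Truncated-SVD low-rank approximation of each channel.

    Illustrative sketch of the preprocessing idea: keeping only the top
    singular components discards much of the high-frequency content where
    adversarial perturbations concentrate. (The paper itself uses tensor
    decompositions rather than per-channel SVD.)
    """
    out = np.empty(image.shape, dtype=np.float64)
    for c in range(image.shape[2]):
        U, s, Vt = np.linalg.svd(image[:, :, c].astype(np.float64),
                                 full_matrices=False)
        # Reconstruct from the `rank` largest singular triplets only.
        out[:, :, c] = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return out

# Toy demo: a smooth (rank-1) "clean" image plus small noise standing in
# for an adversarial perturbation.
rng = np.random.default_rng(0)
base = np.outer(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 1.0, 64))
img = np.stack([base] * 3, axis=2)                  # clean image
adv = img + 0.05 * rng.standard_normal(img.shape)   # "perturbed" image
rec = low_rank_approx(adv, rank=1)                  # low-rank "defense"
print(np.abs(adv - img).mean(), np.abs(rec - img).mean())
```

In this toy setting the reconstruction error of the low-rank output is well below that of the perturbed input, mirroring the paper's claim that low-rank approximation suppresses high-frequency perturbations.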
Pages: 6