TENSORSHIELD: Tensor-based Defense Against Adversarial Attacks on Images

Cited by: 0
Authors
Entezari, Negin [1 ]
Papalexakis, Evangelos E. [1 ]
Affiliations
[1] Univ Calif Riverside, Riverside, CA 92521 USA
Source
2022 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM) | 2022
Keywords
adversarial machine learning; deep neural networks; image classification;
DOI
10.1109/MILCOM55135.2022.10017763
Chinese Library Classification
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
Recent studies have demonstrated that machine learning approaches such as deep neural networks (DNNs) are easily fooled by adversarial attacks: subtle, imperceptible perturbations of the input data can change a network's prediction. Deploying such vulnerable methods raises serious concerns, especially in domains where security is an important factor, so it is crucial to design defense mechanisms against adversarial attacks. For the task of image classification, imperceptible perturbations mostly occur in the high-frequency spectrum of the image. In this paper, we utilize tensor decomposition techniques as a preprocessing step to find a low-rank approximation of images that can significantly discard high-frequency perturbations. Recently, a defense framework called SHIELD [1] was shown to "vaccinate" Convolutional Neural Networks (CNNs) against adversarial examples by performing random-quality JPEG compression on local patches of images from the ImageNet dataset. Our tensor-based defense mechanism outperforms the SLQ method from SHIELD by 14% against Fast Gradient Sign Method (FGSM) adversarial attacks, while maintaining comparable speed.
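The core idea of the abstract, replacing an image with a low-rank approximation so that high-frequency adversarial perturbations are discarded before classification, can be illustrated with a minimal sketch. The paper uses tensor decompositions; the sketch below uses a per-channel truncated SVD as the simplest matrix analog, with `rank` as an assumed, illustrative parameter (the paper's actual decomposition and ranks may differ):

```python
import numpy as np

def low_rank_denoise(image, rank=20):
    """Approximate each channel of an HxWxC image by a truncated SVD,
    keeping only the top `rank` singular components. High-frequency
    content (where adversarial noise tends to concentrate) falls mostly
    in the discarded tail of the spectrum."""
    image = np.asarray(image, dtype=float)
    out = np.empty_like(image)
    for c in range(image.shape[2]):
        U, s, Vt = np.linalg.svd(image[:, :, c], full_matrices=False)
        k = min(rank, len(s))
        # Reconstruct from the k leading singular triplets only.
        out[:, :, c] = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return out
```

In a defense pipeline of this kind, the denoised image (rather than the raw input) is fed to the classifier; a full tensor method such as Tucker or CP decomposition would compress all channels jointly instead of one channel at a time.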
Pages: 6
Related Papers
50 items in total
  • [21] Apollon: A robust defense system against Adversarial Machine Learning attacks in Intrusion Detection Systems
    Paya, Antonio
    Arroni, Sergio
    Garcia-Diaz, Vicente
    Gomez, Alberto
    COMPUTERS & SECURITY, 2024, 136
  • [22] Adversarial Deep Learning: A Survey on Adversarial Attacks and Defense Mechanisms on Image Classification
    Khamaiseh, Samer Y.
    Bagagem, Derek
    Al-Alaj, Abdullah
    Mancino, Mathew
    Alomari, Hakam W.
    IEEE ACCESS, 2022, 10 : 102266 - 102291
  • [23] BDDR: An Effective Defense Against Textual Backdoor Attacks
    Shao, Kun
    Yang, Junan
    Ai, Yang
    Liu, Hui
    Zhang, Yu
    COMPUTERS & SECURITY, 2021, 110
  • [24] Adversarial Training Against Adversarial Attacks for Machine Learning-Based Intrusion Detection Systems
    Haroon, Muhammad Shahzad
    Ali, Husnain Mansoor
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 73 (02): : 3513 - 3527
  • [25] Online Alternate Generator Against Adversarial Attacks
    Li, Haofeng
    Zeng, Yirui
    Li, Guanbin
    Lin, Liang
    Yu, Yizhou
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 9305 - 9315
  • [26] ADVERSARIAL ATTACKS AGAINST AUDIO SURVEILLANCE SYSTEMS
    Ntalampiras, Stavros
    2022 30TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2022), 2022, : 284 - 288
  • [27] Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
    Xu, Han
    Ma, Yao
    Liu, Hao-Chen
    Deb, Debayan
    Liu, Hui
    Tang, Ji-Liang
    Jain, Anil K.
    INTERNATIONAL JOURNAL OF AUTOMATION AND COMPUTING, 2020, 17 (02) : 151 - 178
  • [28] Detect Adversarial Attacks Against Deep Neural Networks With GPU Monitoring
    Zoppi, Tommaso
    Ceccarelli, Andrea
    IEEE ACCESS, 2021, 9 : 150579 - 150591
  • [29] Hadamard's Defense Against Adversarial Examples
    Hoyos, Angello
    Ruiz, Ubaldo
    Chavez, Edgar
    IEEE ACCESS, 2021, 9 : 118324 - 118333
  • [30] 2N labeling defense method against adversarial attacks by filtering and extended class label set
    Szucs, Gabor
    Kiss, Richard
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (11) : 16717 - 16740