Attacking Compressed Vision Transformers

Cited by: 0
Authors
Parekh, Swapnil [1 ]
Shukla, Pratyush [1 ]
Shah, Devansh [1 ]
Affiliations
[1] NYU, New York, NY 10012 USA
Keywords
Adversarial attacks; Vision transformers; Model compression; Universal adversarial perturbations; Quantization; Pruning; Knowledge distillation;
DOI
10.1007/978-3-031-28073-3_50
CLC Classification
TP [Automation technology, computer technology];
Discipline Code
0812 ;
Abstract
Vision Transformers are increasingly embedded in industrial systems due to their superior performance, but their memory and power requirements make deploying them to edge devices challenging. Hence, model compression techniques are now widely used for edge deployment, as they decrease resource requirements and make model inference fast and efficient. However, the reliability and robustness of compressed models from a security perspective are major concerns in safety-critical applications. Adversarial attacks are like optical illusions for ML algorithms, and they can severely degrade the accuracy and reliability of models. In this work, we investigate the performance of adversarial attacks on Vision Transformer models compressed using three state-of-the-art (SOTA) compression techniques. We also analyze the effect that different compression techniques, namely Quantization, Pruning, and Weight Multiplexing, have on the transferability of adversarial attacks.
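The attacks the abstract refers to perturb an input in the direction that increases the model's loss. As a minimal, hypothetical sketch (not the paper's actual setup), the Fast Gradient Sign Method (FGSM), a standard adversarial attack, can be illustrated on a toy linear classifier with NumPy; the model, seed, and epsilon here are illustrative assumptions:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM: step by eps in the sign of the loss gradient, keep input in [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def loss_and_grad(W, x, y):
    """Cross-entropy loss of a linear classifier and its gradient w.r.t. x."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # d(-log p[y]) / dx = W^T (p - onehot(y))
    g = W.T @ (p - np.eye(len(p))[y])
    return -np.log(p[y]), g

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))   # toy 2-class linear "model"
x = rng.uniform(size=4)       # clean input in [0, 1]
y = 0                         # true class

_, g = loss_and_grad(W, x, y)
# To first order, this step increases the loss on the true class.
x_adv = fgsm_perturb(x, g, eps=0.1)
```

The same gradient-sign step applied to a quantized or pruned network is how transferability experiments like those in the paper are typically framed: the perturbation is crafted on one model and evaluated on its compressed variants.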
Pages: 743-758 (16 pages)
Related Papers
50 items total
  • [1] BiViT: Extremely Compressed Binary Vision Transformers
    He, Yefei
    Lou, Zhenyu
    Zhang, Luoming
    Liu, Jing
    Wu, Weijia
    Zhou, Hong
    Zhuang, Bohan
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 5628 - 5640
  • [2] Decepticon: Attacking Secrets of Transformers
    Al Rafi, Mujahid
    Feng, Yuan
    Yao, Fan
    Tang, Meng
    Jeon, Hyeran
    2023 IEEE INTERNATIONAL SYMPOSIUM ON WORKLOAD CHARACTERIZATION, IISWC, 2023, : 128 - 139
  • [3] SALSA: Attacking Lattice Cryptography with Transformers
    Wenger, Emily
    Chen, Mingjie
    Charton, Francois
    Lauter, Kristin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [4] Quantum Vision Transformers
    Cherrat, El Amine
    Kerenidis, Iordanis
    Mathur, Natansh
    Landman, Jonas
    Strahm, Martin
    Li, Yun Yvonna
    QUANTUM, 2024, 8 : 1 - 20
  • [5] Scaling Vision Transformers
    Zhai, Xiaohua
    Kolesnikov, Alexander
    Houlsby, Neil
    Beyer, Lucas
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 12094 - 12103
  • [6] Transformers in Vision: A Survey
    Khan, Salman
    Naseer, Muzammal
    Hayat, Munawar
    Zamir, Syed Waqas
    Khan, Fahad Shahbaz
    Shah, Mubarak
    ACM COMPUTING SURVEYS, 2022, 54 (10S)
  • [7] Reversible Vision Transformers
    Mangalam, Karttikeya
    Fan, Haoqi
    Li, Yanghao
    Wu, Chao-Yuan
    Xiong, Bo
    Feichtenhofer, Christoph
    Malik, Jitendra
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 10820 - 10830
  • [8] Denoising Vision Transformers
    Yang, Jiawei
    Luo, Katie Z.
    Li, Jiefeng
    Deng, Congyue
    Guibas, Leonidas
    Krishnan, Dilip
    Weinberger, Kilian Q.
    Tian, Yonglong
    Wang, Yue
    COMPUTER VISION - ECCV 2024, PT LXXXV, 2025, 15143 : 453 - 469
  • [9] Multiscale Vision Transformers
    Fan, Haoqi
    Xiong, Bo
    Mangalam, Karttikeya
    Li, Yanghao
    Yan, Zhicheng
    Malik, Jitendra
    Feichtenhofer, Christoph
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 6804 - 6815
  • [10] Eventful Transformers: Leveraging Temporal Redundancy in Vision Transformers
    Dutson, Matthew
    Li, Yin
    Gupta, Mohit
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 16865 - 16877