Provable Repair of Vision Transformers

Cited by: 0
Authors
Nawas, Stephanie [1 ]
Tao, Zhe [1 ]
Thakur, Aditya V. [1]
Affiliations
[1] Univ Calif Davis, Davis, CA 95616 USA
Source
AI VERIFICATION, SAIV 2024 | 2024, Vol. 14846
DOI
10.1007/978-3-031-65112-0_8
CLC Classification Code
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Vision Transformers have emerged as state-of-the-art image recognition tools, but may still exhibit incorrect behavior. Incorrect image recognition can have disastrous consequences in safety-critical real-world applications such as self-driving automobiles. In this paper, we present Provable Repair of Vision Transformers (PRoViT), a provable repair approach that guarantees the correct classification of images in a repair set for a given Vision Transformer without modifying its architecture. PRoViT avoids negatively affecting correctly classified images (drawdown) by minimizing the changes made to the Vision Transformer's parameters and original output. We observe that for Vision Transformers, unlike for other architectures such as ResNet or VGG, editing just the parameters in the last layer achieves correctness guarantees and very low drawdown. We introduce a novel method for editing these last-layer parameters that enables PRoViT to efficiently repair state-of-the-art Vision Transformers for thousands of images, far exceeding the capabilities of prior provable repair approaches.
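The abstract's key observation is that correctness on a repair set can be achieved by editing only the final linear (classification) layer of a Vision Transformer. The sketch below illustrates that idea in a deliberately simplified form: given penultimate-layer feature vectors of the repair-set images, it iteratively nudges the last-layer weight rows until every repair-set image receives its target label, then verifies that property explicitly. This is an illustrative perceptron-style stand-in, not PRoViT's actual algorithm, which formulates the edit as an optimization that minimizes the change to the parameters and outputs; all names here are hypothetical.

```python
import numpy as np

def repair_last_layer(W, feats, labels, step=0.1, max_iters=1000):
    """Edit only the last-layer weight matrix W (classes x features) so
    that every feature vector in `feats` is classified as its target
    label in `labels`. Illustrative sketch: small additive corrections
    are applied until the repair set is classified correctly, and the
    final check makes the correctness guarantee explicit."""
    W = W.copy()
    for _ in range(max_iters):
        done = True
        for f, y in zip(feats, labels):
            pred = int(np.argmax(W @ f))
            if pred != y:
                # Push the target class's row toward f and the
                # competing row away from it; only W changes, the
                # rest of the network is untouched.
                W[y] += step * f
                W[pred] -= step * f
                done = False
        if done:
            # Explicit verification: every repair-set image is now
            # classified as its target label.
            assert all(int(np.argmax(W @ f)) == y
                       for f, y in zip(feats, labels))
            return W
    raise RuntimeError("repair did not converge within max_iters")
```

Because only the last layer moves, and by a small amount, correctly classified images whose scores are far from the decision boundary tend to keep their labels, which is the intuition behind the low drawdown the abstract reports.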
Pages: 156-178
Page count: 23