Adversarial Robustness of Multi-bit Convolutional Neural Networks

Cited by: 0
Authors:
Frickenstein, Lukas [1 ]
Sampath, Shambhavi Balamuthu [1 ]
Mori, Pierpaolo [3 ]
Vemparala, Manoj-Rohit [1 ]
Fasfous, Nael [1 ]
Frickenstein, Alexander [1 ]
Unger, Christian [1 ]
Passerone, Claudio [3 ]
Stechele, Walter [2 ]
Affiliations:
[1] BMW Autonomous Driving, Unterschleissheim, Germany
[2] Tech Univ Munich, Munich, Germany
[3] Politecn Torino, Turin, Italy
Source:
INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 3, INTELLISYS 2023 | 2024, Vol. 824
Keywords:
Adversarial robustness; Neural network quantization; Multi-bit convolutional neural networks;
DOI:
10.1007/978-3-031-47715-7_12
CLC Number:
TP18 [Artificial Intelligence Theory]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
Deploying convolutional neural networks (CNNs) on resource-constrained, embedded hardware poses the challenge of balancing task-related accuracy against resource-efficiency. For safety-critical applications, a third optimization objective is crucial, namely the robustness of CNNs. To address these challenges, this paper investigates the tripartite optimization problem of task-related accuracy, resource-efficiency, and adversarial robustness of CNNs by utilizing multi-bit networks (MBNs). To better navigate this tripartite optimization space, this work thoroughly studies the design space of MBNs by varying the number of weight and activation bases. First, by systematically evaluating the design space, the proactive defensive model MBN3x1 is identified. Compared to a 2-bit fixed-point quantized implementation of ResNet-20 on CIFAR-10, this model achieves better adversarial accuracy (+10.3pp) against the first-order attack PGD-20 and requires 1.3x fewer bit-operations, at the cost of a slight degradation in natural accuracy (-2.4pp). Similar observations hold for deeper and wider ResNets trained on other datasets, such as CIFAR-100 and ImageNet. Second, this work shows that the defensive capability of MBNs can be increased by adopting a state-of-the-art adversarial training (AT) method. This yields an improvement in adversarial accuracy (+13.6pp) for MBN3x3, with a slight degradation in natural accuracy (-2.4pp) compared to the costly full-precision ResNet-56 on CIFAR-10, which requires 7x more bit-operations. To the best of our knowledge, this is the first paper to highlight the improved robustness of differently configured MBNs and to provide an analysis of their gradient flows.
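The multi-bit networks studied in the abstract represent each weight (or activation) tensor as a weighted sum of several binary bases; varying the number of bases trades reconstruction fidelity against bit-operations. The following minimal sketch illustrates this idea using ABC-Net-style greedy residual binarization — an assumption for illustration; the paper's exact basis-construction method may differ:

```python
import numpy as np

def multibit_decompose(w, num_bases):
    """Greedily approximate tensor w as sum_k alpha_k * B_k with B_k in {-1, +1}.

    Each step binarizes the current residual and fits the least-squares
    scale alpha_k = mean(|residual|) for a sign base, then subtracts the
    contribution before fitting the next base.
    """
    residual = w.astype(np.float64).copy()
    alphas, bases = [], []
    for _ in range(num_bases):
        b = np.sign(residual)
        b[b == 0] = 1.0                    # keep the base strictly binary
        a = np.mean(np.abs(residual))      # optimal scale for a sign base
        alphas.append(a)
        bases.append(b)
        residual -= a * b
    return alphas, bases

def reconstruct(alphas, bases):
    """Recombine the binary bases into a real-valued approximation of w."""
    return sum(a * b for a, b in zip(alphas, bases))

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 3, 3))            # toy conv weight tensor
for k in (1, 2, 3):
    alphas, bases = multibit_decompose(w, k)
    err = np.linalg.norm(w - reconstruct(alphas, bases)) / np.linalg.norm(w)
    print(f"{k} base(s): relative reconstruction error {err:.3f}")
```

Each additional base strictly reduces the residual norm, which is why configurations such as MBN3x1 (three weight bases, one activation base) can approach fixed-point accuracy while each base still admits cheap binary arithmetic on hardware.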
Pages: 157-174 (18 pages)
Related Papers (50 in total):
  • [1] Sanitizing hidden activations for improving adversarial robustness of convolutional neural networks. Mu, Tianshi; Lin, Kequan; Zhang, Huabing; Wang, Jian. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2021, 41(02): 3993-4003
  • [2] Improving adversarial robustness of Bayesian neural networks via multi-task adversarial training. Chen, Xu; Liu, Chuancai; Zhao, Yue; Jia, Zhiyang; Jin, Ge. INFORMATION SCIENCES, 2022, 592: 156-173
  • [3] Adversarial Robustness Certification for Bayesian Neural Networks. Wicker, Matthew; Platzer, Andre; Laurenti, Luca; Kwiatkowska, Marta. FORMAL METHODS, PT I, FM 2024, 2025, 14933: 3-28
  • [4] On the Robustness of Bayesian Neural Networks to Adversarial Attacks. Bortolussi, Luca; Carbone, Ginevra; Laurenti, Luca; Patane, Andrea; Sanguinetti, Guido; Wicker, Matthew. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024: 1-14
  • [5] Adversarial robustness improvement for deep neural networks. Eleftheriadis, Charis; Symeonidis, Andreas; Katsaros, Panagiotis. MACHINE VISION AND APPLICATIONS, 2024, 35(03)
  • [6] A Hybrid Bayesian-Convolutional Neural Network for Adversarial Robustness. Khong, Thi Thu Thao; Nakada, Takashi; Nakashima, Yasuhiko. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2022, E105D(07): 1308-1319
  • [7] Uncovering Hidden Vulnerabilities in Convolutional Neural Networks through Graph-based Adversarial Robustness Evaluation. Wang, Ke; Chen, Zicong; Dang, Xilin; Fan, Xuan; Han, Xuming; Chen, Chien-Ming; Ding, Weiping; Yiu, Siu-Ming; Weng, Jian. PATTERN RECOGNITION, 2023, 143
  • [8] Towards Demystifying Adversarial Robustness of Binarized Neural Networks. Qin, Zihao; Lin, Hsiao-Ying; Shi, Jie. APPLIED CRYPTOGRAPHY AND NETWORK SECURITY WORKSHOPS, ACNS 2021, 2021, 12809: 439-462
  • [9] An orthogonal classifier for improving the adversarial robustness of neural networks. Xu, Cong; Li, Xiang; Yang, Min. INFORMATION SCIENCES, 2022, 591: 251-262