RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks

Cited by: 0
Authors
Marchisio, Alberto [1 ]
De Marco, Antonio [2 ]
Colucci, Alessio [1 ]
Martina, Maurizio [2 ]
Shafique, Muhammad [3 ]
Affiliations
[1] Vienna Univ Technol, Vienna, Austria
[2] Politecn Torino, Turin, Italy
[3] New York Univ, Abu Dhabi, U Arab Emirates
Source
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2023
Keywords
Machine Learning; Deep Neural Networks; Convolutional Neural Networks; Capsule Networks; Dynamic Routing; Adversarial Attacks; Affine Transformations; Security; Robustness; Vulnerability;
DOI
10.1109/IJCNN54540.2023.10190994
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Capsule Networks (CapsNets) are able to hierarchically preserve the pose relationships between multiple objects for image classification tasks. Beyond achieving high accuracy, another relevant factor for deploying CapsNets in safety-critical applications is their robustness against input transformations and malicious adversarial attacks. In this paper, we systematically analyze and evaluate different factors affecting the robustness of CapsNets, compared to traditional Convolutional Neural Networks (CNNs). Towards a comprehensive comparison, we test two CapsNet models and two CNN models on the MNIST, GTSRB, and CIFAR10 datasets, as well as on affine-transformed versions of these datasets. Through a thorough analysis, we show which properties of these architectures contribute most to increasing robustness, along with their limitations. Overall, CapsNets achieve better robustness against adversarial examples and affine transformations than a traditional CNN with a similar number of parameters. Similar conclusions hold for deeper versions of CapsNets and CNNs. Moreover, our results reveal a key finding: the dynamic routing does not contribute much to improving the CapsNets' robustness. Indeed, the main generalization contribution is due to the hierarchical feature learning through capsules.
Pages: 9
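
The abstract describes two robustness checks: accuracy on affine-transformed versions of the test sets and accuracy under adversarial attacks. The following is a minimal sketch, not taken from the paper, of how such a comparison could be set up in PyTorch; the trained `model`, the choice of FGSM as the attack, and all hyperparameters (rotation/translation/scale ranges, epsilon) are illustrative assumptions, and the classifier is assumed to return class logits.

```python
# Hypothetical sketch of the robustness evaluation described in the abstract:
# clean accuracy vs. accuracy under random affine transformations and a simple
# white-box FGSM attack. The trained `model` (a CapsNet or CNN) is assumed to
# exist and is not defined here.
import torch
import torch.nn.functional as F
import torchvision.transforms as T
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader

def accuracy(model, loader, device="cpu"):
    """Top-1 accuracy of `model` over `loader`."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM perturbation of a batch, clipped to the valid pixel range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# Clean test set vs. a randomly affine-transformed version of the same test set.
clean_tf = T.ToTensor()
affine_tf = T.Compose([
    T.RandomAffine(degrees=30, translate=(0.1, 0.1), scale=(0.8, 1.2)),
    T.ToTensor(),
])
clean_loader = DataLoader(MNIST("data", train=False, download=True, transform=clean_tf),
                          batch_size=256)
affine_loader = DataLoader(MNIST("data", train=False, download=True, transform=affine_tf),
                           batch_size=256)

# Example usage with a trained model (placeholder):
# print("clean accuracy :", accuracy(model, clean_loader))
# print("affine accuracy:", accuracy(model, affine_loader))
# x, y = next(iter(clean_loader))
# x_adv = fgsm(model, x, y, eps=0.1)
# print("FGSM accuracy  :", (model(x_adv).argmax(dim=1) == y).float().mean().item())
```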