Crafting Transferable Adversarial Examples Against Face Recognition via Gradient Eroding

Cited by: 0
Authors
Zhou H. [1 ]
Wang Y. [2 ]
Tan Y.-A. [2 ]
Wu S. [2 ]
Zhao Y. [2 ]
Zhang Q. [1 ]
Li Y. [1 ]
Affiliations
[1] Beijing Institute of Technology, School of Computer Science and Technology, Beijing
[2] Beijing Institute of Technology, School of Cyberspace Science and Technology, Beijing
Source
IEEE Transactions on Artificial Intelligence | 2024, Vol. 5, No. 1
Keywords
Adversarial example; black-box attack; face recognition (FR); transfer attack; transferability
DOI
10.1109/TAI.2023.3253083
Abstract
In recent years, deep neural networks (DNNs) have made significant progress on face recognition (FR). However, DNNs have been found to be vulnerable to adversarial examples, which can have severe consequences in real-world applications. This article focuses on improving the transferability of adversarial examples against FR models. We propose gradient eroding (GE), which makes the gradients of residual blocks more diverse by dynamically eroding the back-propagation. Based on GE, we also propose a novel black-box adversarial attack named corrasion attack. Extensive experiments demonstrate that our approach effectively improves the transferability of adversarial attacks against FR models, outperforming state-of-the-art black-box attacks by 29.35% in fooling rate. With adversarial training on the adversarial examples generated by our method, model robustness can be improved by up to 43.2%. Moreover, the corrasion attack successfully breaks two online FR systems, achieving a fooling rate of up to 89.8%. © 2020 IEEE.
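The record contains no code, but the mechanism named in the abstract can be sketched. The following is a minimal, hypothetical PyTorch illustration under our own assumptions: "eroding the back-propagation" is modeled as a Bernoulli mask (keep probability KEEP_PROB) applied to the gradient arriving at each residual block of a torchvision ResNet-50 surrogate, resampled on every backward pass inside a plain I-FGSM loop. Every name and parameter here (erode, attach_erosion, ge_ifgsm, KEEP_PROB, eps, alpha, steps) is illustrative rather than the authors' implementation, and a real attack on FR would typically optimize an embedding-similarity loss on a face model instead of classification cross-entropy.

```python
# Hypothetical sketch of gradient eroding (GE); not the authors' code.
import torch
import torch.nn.functional as F
import torchvision.models as models
from torchvision.models.resnet import Bottleneck

KEEP_PROB = 0.7  # assumed erosion rate; the paper's schedule may differ

def erode(grad):
    # Randomly zero out gradient entries, then rescale survivors so the
    # expected magnitude is preserved. A fresh mask is drawn on every
    # backward pass, which "erodes" back-propagation dynamically.
    mask = torch.bernoulli(torch.full_like(grad, KEEP_PROB))
    return grad * mask / KEEP_PROB

def attach_erosion(module, inputs, output):
    # Forward hook: attach a tensor hook to the residual block's output so
    # the gradient flowing back into the block is eroded.
    if output.requires_grad:
        output.register_hook(erode)

surrogate = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for m in surrogate.modules():
    if isinstance(m, Bottleneck):
        m.register_forward_hook(attach_erosion)

def ge_ifgsm(x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Iterative FGSM on the eroded surrogate. Because erosion masks are
    # resampled at every step, the attack effectively averages over many
    # perturbed gradient paths, the intuition behind better transferability.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay in the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

Rescaling surviving gradient entries by 1/KEEP_PROB keeps their expected magnitude unchanged, so the step size alpha needs no re-tuning when erosion is enabled.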
Pages: 412-419
Page count: 7