Crafting Transferable Adversarial Examples Against Face Recognition via Gradient Eroding

Cited by: 0
Authors
Zhou H. [1 ]
Wang Y. [2 ]
Tan Y.-A. [2 ]
Wu S. [2 ]
Zhao Y. [2 ]
Zhang Q. [1 ]
Li Y. [1 ]
Affiliations
[1] Beijing Institute of Technology, School of Computer Science and Technology, Beijing
[2] Beijing Institute of Technology, School of Cyberspace Science and Technology, Beijing
Source
IEEE Transactions on Artificial Intelligence | 2024, Vol. 5, No. 1
Keywords
Adversarial example; black-box attack; face recognition (FR); transfer attack; transferability;
DOI
10.1109/TAI.2023.3253083
Abstract
In recent years, deep neural networks (DNNs) have made significant progress on face recognition (FR). However, DNNs have been found to be vulnerable to adversarial examples, which can lead to fatal consequences in real-world applications. This article focuses on improving the transferability of adversarial examples against FR models. We propose gradient eroding (GE), which makes the gradients of residual blocks more diverse by dynamically eroding the back-propagation. Based on GE, we further propose a novel black-box adversarial attack named corrasion attack. Extensive experiments demonstrate that our approach effectively improves the transferability of adversarial attacks against FR models, outperforming state-of-the-art black-box attacks by 29.35% in fooling rate. By leveraging adversarial training with our generated adversarial examples, model robustness can be improved by up to 43.2%. In addition, corrasion attack successfully breaks two online FR systems, achieving a fooling rate of up to 89.8%. © 2020 IEEE.
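The abstract describes GE only at a high level: stochastically "eroding" the gradient that flows through residual branches during back-propagation so that crafted perturbations rely less on one surrogate model's exact gradients. A minimal NumPy sketch of that idea is below; the toy residual block, the Bernoulli erosion mask, and the `erode_p` rate are all illustrative assumptions, not the paper's exact eroding rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_forward(x, W):
    """Toy residual block: y = x + relu(x @ W)."""
    h = np.maximum(x @ W, 0.0)
    return x + h, h

def eroded_backward(grad_y, W, h, erode_p=0.3):
    """Backprop through the toy block, randomly zeroing ('eroding') a
    fraction erode_p of the gradient on the residual branch.

    The skip connection keeps its full gradient; only the branch
    gradient is stochastically eroded, diversifying the overall
    gradient across iterations. This is an assumed illustration of
    the eroding idea, not the paper's published algorithm.
    """
    mask = (rng.random(grad_y.shape) >= erode_p).astype(grad_y.dtype)
    grad_branch = grad_y * mask * (h > 0.0)  # eroded branch gradient (ReLU gate)
    return grad_y + grad_branch @ W.T        # skip path + eroded branch path
```

With `erode_p=0` this reduces to the ordinary residual-block gradient; with `erode_p=1` only the skip-connection gradient survives, so intermediate settings interpolate between the two and yield more diverse attack directions across iterations.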
Pages: 412-419 (7 pages)
Related Papers
50 in total
  • [1] TransMix: Crafting highly transferable adversarial examples to evade face recognition models
    Khedr, Yasmeen M.
    Liu, Xin
    He, Kun
    IMAGE AND VISION COMPUTING, 2024, 146
  • [2] Crafting transferable adversarial examples via contaminating the salient feature variance
    Ren, Yuchen
    Zhu, Hegui
    Sui, Xiaoyan
    Liu, Chong
    INFORMATION SCIENCES, 2023, 644
  • [3] GNP ATTACK: TRANSFERABLE ADVERSARIAL EXAMPLES VIA GRADIENT NORM PENALTY
    Wu, Tao
    Luo, Tie
    Wunsch, Donald C.
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 3110 - 3114
  • [4] Towards Transferable Adversarial Attack Against Deep Face Recognition
    Zhong, Yaoyao
    Deng, Weihong
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 : 1452 - 1466
  • [5] Hierarchical feature transformation attack: Generate transferable adversarial examples for face recognition
    Li, Yuanbo
    Hu, Cong
    Wang, Rui
    Wu, Xiaojun
    APPLIED SOFT COMPUTING, 2025, 172
  • [6] Toward Transferable Attack via Adversarial Diffusion in Face Recognition
    Hu, Cong
    Li, Yuanbo
    Feng, Zhenhua
    Wu, Xiaojun
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 5506 - 5519
  • [7] On Brightness Agnostic Adversarial Examples Against Face Recognition Systems
    Singh, Inderjeet
    Momiyama, Satoru
    Kakizaki, Kazuya
    Araki, Toshinori
    PROCEEDINGS OF THE 20TH INTERNATIONAL CONFERENCE OF THE BIOMETRICS SPECIAL INTEREST GROUP (BIOSIG 2021), 2021, 315
  • [8] Sibling-Attack: Rethinking Transferable Adversarial Attacks against Face Recognition
    Li, Zexin
    Yin, Bangjie
    Yao, Taiping
    Guo, Junfeng
    Ding, Shouhong
    Chen, Simin
    Liu, Cong
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 24626 - 24637
  • [9] ANF: Crafting Transferable Adversarial Point Clouds via Adversarial Noise Factorization
    Chen, Hai
    Zhao, Shu
    Yang, Xiao
    Yan, Huanqian
    He, Yuan
    Xue, Hui
    Qian, Fulan
    Su, Hang
    IEEE TRANSACTIONS ON BIG DATA, 2025, 11 (02) : 835 - 847
  • [10] Powerful Physical Adversarial Examples Against Practical Face Recognition Systems
    Singh, Inderjeet
    Araki, Toshinori
    Kakizaki, Kazuya
    2022 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW 2022), 2022, : 301 - 310