Crafting Transferable Adversarial Examples Against Face Recognition via Gradient Eroding

Cited by: 0
Authors
Zhou H. [1 ]
Wang Y. [2 ]
Tan Y.-A. [2 ]
Wu S. [2 ]
Zhao Y. [2 ]
Zhang Q. [1 ]
Li Y. [1 ]
Affiliations
[1] Beijing Institute of Technology, School of Computer Science and Technology, Beijing
[2] Beijing Institute of Technology, School of Cyberspace Science and Technology, Beijing
Source
IEEE Transactions on Artificial Intelligence | 2024, Vol. 5, No. 1
Keywords
Adversarial example; black-box attack; face recognition (FR); transfer attack; transferability
DOI
10.1109/TAI.2023.3253083
Abstract
In recent years, deep neural networks (DNNs) have made significant progress on face recognition (FR). However, DNNs have been found to be vulnerable to adversarial examples, which can lead to severe consequences in real-world applications. This article focuses on improving the transferability of adversarial examples against FR models. We propose gradient eroding (GE), which dynamically erodes back-propagation to make the gradients of residual blocks more diverse. Building on GE, we also propose a novel black-box adversarial attack named corrasion attack. Extensive experiments demonstrate that our approach effectively improves the transferability of adversarial attacks against FR models, outperforming state-of-the-art black-box attacks by 29.35% in fooling rate. Adversarial training with the adversarial examples we generate improves model robustness by up to 43.2%. Moreover, corrasion attack successfully breaks two online FR systems, achieving a fooling rate of up to 89.8%. © 2020 IEEE.
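The abstract does not spell out the erosion rule, so the following PyTorch sketch is only one plausible reading of "eroding the back-propagation dynamically": an identity operation in the forward pass that randomly zeroes part of the gradient flowing back through each residual block. The names GradientErosion, ErodedBlock, and the parameter erode_prob are hypothetical, not the paper's API.

```python
import torch
import torch.nn as nn


class GradientErosion(torch.autograd.Function):
    """Identity in the forward pass; randomly zeroes ("erodes") a
    fraction of the gradient in the backward pass. Hypothetical
    reconstruction of GE -- the abstract gives no exact erosion rule."""

    @staticmethod
    def forward(ctx, x, erode_prob):
        ctx.erode_prob = erode_prob
        return x

    @staticmethod
    def backward(ctx, grad_output):
        # Drop each gradient element with probability erode_prob and
        # rescale the survivors, so the expected gradient is unchanged
        # while the back-propagated signal varies from step to step.
        keep = (torch.rand_like(grad_output) > ctx.erode_prob).float()
        return grad_output * keep / (1.0 - ctx.erode_prob), None


class ErodedBlock(nn.Module):
    """Wraps a residual block so gradients flowing back through its
    output are eroded; the forward computation is untouched."""

    def __init__(self, block, erode_prob=0.3):
        super().__init__()
        self.block = block
        self.erode_prob = erode_prob

    def forward(self, x):
        return GradientErosion.apply(self.block(x), self.erode_prob)
```

Under this reading, an attacker would wrap each residual block of the surrogate FR model with ErodedBlock and then run a standard iterative attack such as I-FGSM; because a fresh random mask is drawn at every backward pass, the perturbation averages over many eroded gradient paths rather than overfitting to the surrogate, which is one common mechanism for improving transferability.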
Pages: 412-419
Number of pages: 7
Related Papers
50 records in total
  • [21] Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition. Jia, Shuai; Yin, Bangjie; Yao, Taiping; Ding, Shouhong; Shen, Chunhua; Yang, Xiaokang; Ma, Chao. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
  • [22] Transferable Universal Adversarial Perturbations Against Speaker Recognition Systems. Liu, Xiaochen; Tan, Hao; Zhang, Junjian; Li, Aiping; Gu, Zhaoquan. World Wide Web: Internet and Web Information Systems, 2024, 27(3).
  • [23] Crafting Imperceptible and Transferable Adversarial Examples: Leveraging Conditional Residual Generator and Wavelet Transforms to Deceive Deepfake Detection. Li, Zhiyuan; Jin, Xin; Jiang, Qian; Wang, Puming; Lee, Shin-Jye; Yao, Shaowen; Zhou, Wei. The Visual Computer, 2024: 3329-3344.
  • [24] AdvCheck: Characterizing Adversarial Examples via Local Gradient Checking. Chen, Ruoxi; Jin, Haibo; Chen, Jinyin; Zheng, Haibin; Zheng, Shilian; Yang, Xiaoniu; Yang, Xing. Computers & Security, 2024, 136.
  • [25] Adversarial Attacks Against Face Recognition: A Comprehensive Study. Vakhshiteh, Fatemeh; Nickabadi, Ahmad; Ramachandra, Raghavendra. IEEE Access, 2021, 9: 92735-92756.
  • [26] Universal Adversarial Spoofing Attacks Against Face Recognition. Amada, Takuma; Liew, Seng Pei; Kakizaki, Kazuya; Araki, Toshinori. 2021 International Joint Conference on Biometrics (IJCB 2021), 2021.
  • [27] Adversarial Examples for Replay Attacks Against CNN-Based Face Recognition with Anti-Spoofing Capability. Zhang, Bowen; Tondi, Benedetta; Barni, Mauro. Computer Vision and Image Understanding, 2020, 197.
  • [28] Multi-Layer Feature Augmentation Based Transferable Adversarial Examples Generation for Speaker Recognition. Li, Zhuhai; Zhang, Jie; Guo, Wu. Advanced Intelligent Computing Technology and Applications, Pt. IV (ICIC 2024), 2024, 14865: 373-385.
  • [29] Provable Defenses Against Adversarial Examples via the Convex Outer Adversarial Polytope. Wong, Eric; Kolter, J. Zico. International Conference on Machine Learning, 2018, Vol. 80.
  • [30] Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification. Daeubener, Sina; Schoenherr, Lea; Fischer, Asja; Kolossa, Dorothea. Interspeech 2020, 2020: 4661-4665.