Powerful Physical Adversarial Examples Against Practical Face Recognition Systems

Cited by: 9
Authors
Singh, Inderjeet [1 ]
Araki, Toshinori [1 ]
Kakizaki, Kazuya [1 ]
Affiliations
[1] NEC Corp Ltd, Kawasaki, Kanagawa, Japan
Source
2022 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW 2022) | 2022
DOI
10.1109/WACVW54805.2022.00036
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning (ML)-based safety-critical applications are vulnerable to carefully crafted input instances called adversarial examples (AXs). An adversary can conveniently attack these target systems from both the digital and physical worlds. This paper addresses the generation of robust physical AXs against face recognition systems. We present a novel smoothness loss function and a patch-noise combo attack for realizing powerful physical AXs. The smoothness loss introduces the concept of delayed constraints into the attack generation process, yielding better handling of the optimization complexity and smoother AXs for the physical domain. The patch-noise combo attack combines patch noise and imperceptibly small noise drawn from different distributions to generate powerful registration-based physical AXs. An extensive experimental analysis found that our smoothness loss results in more robust and more transferable digital and physical AXs than conventional techniques. Notably, our smoothness loss yields a 1.17x and 1.97x better mean attack success rate (ASR) in physical white-box and black-box attacks, respectively. Our patch-noise combo attack furthers these performance gains, achieving 2.39x and 4.74x higher mean ASR than the conventional technique in physical-world white-box and black-box attacks, respectively.
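The abstract sketches two ideas: a smoothness penalty whose constraint weight is delayed during optimization, and a combo perturbation mixing a visible patch with small bounded global noise. A minimal illustrative sketch in NumPy is shown below; the function names (`smoothness_loss`, `delayed_weight`, `combo_perturbation`), the linear ramp schedule, and the epsilon bound are all assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def smoothness_loss(patch):
    # Total-variation-style penalty: sum of absolute differences between
    # neighboring pixels; smoother patches give smaller values.
    dh = np.abs(np.diff(patch, axis=0)).sum()
    dw = np.abs(np.diff(patch, axis=1)).sum()
    return dh + dw

def delayed_weight(step, delay_steps, max_weight):
    # "Delayed constraint" (illustrative schedule): the smoothness term is
    # switched off early in the optimization, then ramped up linearly, so the
    # attack objective is optimized first and smoothness is enforced later.
    if step < delay_steps:
        return 0.0
    return min(max_weight, max_weight * (step - delay_steps) / delay_steps)

def combo_perturbation(image, patch, mask, noise, eps=8 / 255):
    # Patch-noise combo (illustrative): paste the patch into the masked
    # region and add small, eps-bounded noise over the whole image.
    noise = np.clip(noise, -eps, eps)
    out = image * (1 - mask) + patch * mask + noise
    return np.clip(out, 0.0, 1.0)
```

In a full attack loop, the total objective would be something like the adversarial loss plus `delayed_weight(step, ...) * smoothness_loss(patch)`, minimized over the patch and noise jointly.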
Pages: 301-310 (10 pages)
Related papers (50 records)
  • [1] On Brightness Agnostic Adversarial Examples Against Face Recognition Systems
    Singh, Inderjeet
    Momiyama, Satoru
    Kakizaki, Kazuya
    Araki, Toshinori
    PROCEEDINGS OF THE 20TH INTERNATIONAL CONFERENCE OF THE BIOMETRICS SPECIAL INTEREST GROUP (BIOSIG 2021), 2021, 315
  • [2] Practical Adversarial Attacks Against Speaker Recognition Systems
    Li, Zhuohang
    Shi, Cong
    Xie, Yi
    Liu, Jian
    Yuan, Bo
    Chen, Yingying
    PROCEEDINGS OF THE 21ST INTERNATIONAL WORKSHOP ON MOBILE COMPUTING SYSTEMS AND APPLICATIONS (HOTMOBILE'20), 2020, : 9 - 14
  • [3] Crafting Transferable Adversarial Examples Against Face Recognition via Gradient Eroding
    Zhou H.
    Wang Y.
    Tan Y.-A.
    Wu S.
    Zhao Y.
    Zhang Q.
    Li Y.
    IEEE Transactions on Artificial Intelligence, 2024, 5 (1): 412 - 419
  • [4] GENERATING ADVERSARIAL EXAMPLES BY MAKEUP ATTACKS ON FACE RECOGNITION
    Zhu, Zheng-An
    Lu, Yun-Zhong
    Chiang, Chen-Kuo
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 2516 - 2520
  • [5] Adversarial Relighting Against Face Recognition
    Zhang, Qian
    Guo, Qing
    Gao, Ruijun
    Juefei-Xu, Felix
    Yu, Hongkai
    Feng, Wei
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 9145 - 9157
  • [6] Adversarial Examples to Fool Iris Recognition Systems
    Soleymani, Sobhan
    Dabouei, Ali
    Dawson, Jeremy
    Nasrabadi, Nasser M.
    2019 INTERNATIONAL CONFERENCE ON BIOMETRICS (ICB), 2019,
  • [7] Generating practical adversarial examples against learning-based network intrusion detection systems
    Kumar, Vivek
    Kumar, Kamal
    Singh, Maheep
    ANNALS OF TELECOMMUNICATIONS, 2025, 80 (3-4) : 209 - 226
  • [8] Adversarial Attacks Against Face Recognition: A Comprehensive Study
    Vakhshiteh, Fatemeh
    Nickabadi, Ahmad
    Ramachandra, Raghavendra
    IEEE ACCESS, 2021, 9 : 92735 - 92756
  • [9] Universal Adversarial Spoofing Attacks against Face Recognition
    Amada, Takuma
    Liew, Seng Pei
    Kakizaki, Kazuya
    Araki, Toshinori
    2021 INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB 2021), 2021,
  • [10] Adversarial examples for replay attacks against CNN-based face recognition with anti-spoofing capability
    Zhang, Bowen
    Tondi, Benedetta
    Barni, Mauro
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2020, 197