Powerful Physical Adversarial Examples Against Practical Face Recognition Systems

Cited by: 9
Authors
Singh, Inderjeet [1 ]
Araki, Toshinori [1 ]
Kakizaki, Kazuya [1 ]
Affiliations
[1] NEC Corp Ltd, Kawasaki, Kanagawa, Japan
Source
2022 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW 2022) | 2022
DOI
10.1109/WACVW54805.2022.00036
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning (ML)-based safety-critical applications are vulnerable to carefully crafted input instances called adversarial examples (AXs). An adversary can conveniently attack these target systems from the digital as well as the physical world. This paper addresses the generation of robust physical AXs against face recognition systems. We present a novel smoothness loss function and a patch-noise combo attack for realizing powerful physical AXs. The smoothness loss interjects the concept of delayed constraints during the attack generation process, thereby handling the optimization complexity better and yielding smoother AXs for the physical domain. The patch-noise combo attack combines patch noise and imperceptibly small noises from different distributions to generate powerful registration-based physical AXs. An extensive experimental analysis found that our smoothness loss results in more robust and more transferable digital and physical AXs than conventional techniques. Notably, our smoothness loss results in a 1.17 and 1.97 times better mean attack success rate (ASR) in physical white-box and black-box attacks, respectively. Our patch-noise combo attack furthers these performance gains and results in 2.39 and 4.74 times higher mean ASR than the conventional technique in physical-world white-box and black-box attacks, respectively.
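The abstract does not specify the exact form of the smoothness loss or its delayed-constraint schedule. As a hedged illustration only, the sketch below assumes a common total-variation-style smoothness penalty on the adversarial patch and a linear ramp that keeps the constraint inactive for an early fraction of the optimization (the "delay"); the function names, the quadratic penalty, and the `delay_frac` parameter are this sketch's assumptions, not the paper's definitions.

```python
import numpy as np

def smoothness_loss(patch):
    """Total-variation-style smoothness penalty on an H x W x C patch.

    Penalizes squared differences between horizontally and vertically
    adjacent pixels, so smoother patches score lower.
    """
    dx = patch[:, 1:, :] - patch[:, :-1, :]   # horizontal neighbor differences
    dy = patch[1:, :, :] - patch[:-1, :, :]   # vertical neighbor differences
    return float(np.sum(dx ** 2) + np.sum(dy ** 2))

def delayed_weight(step, total_steps, delay_frac=0.5):
    """Delayed-constraint schedule: weight is 0 for the first
    `delay_frac` fraction of steps, then ramps linearly to 1.

    Early steps optimize the adversarial objective unconstrained;
    the smoothness constraint is enforced progressively later.
    """
    start = delay_frac * total_steps
    if step < start:
        return 0.0
    return (step - start) / (total_steps - start)

# At each attack-generation step, the total objective would then be
# something like: adversarial_loss + delayed_weight(t, T) * smoothness_loss(patch).
```

Under this reading, the delayed schedule lets the optimizer first find a strong adversarial direction and only then trades attack strength for printability-friendly smoothness.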
Pages: 301-310 (10 pages)