Robust Physical-World Attacks on Face Recognition

Cited by: 32
Authors
Zheng, Xin [1 ]
Fan, Yanbo [2 ]
Wu, Baoyuan [3 ]
Zhang, Yong [2 ]
Wang, Jue [2 ]
Pan, Shirui [4 ]
Affiliations
[1] Monash Univ, Melbourne, Vic, Australia
[2] Tencent AI Lab, Shenzhen, Peoples R China
[3] Chinese Univ Hong Kong, Shenzhen Res Inst Big Data, Sch Data Sci, Shenzhen, Peoples R China
[4] Griffith Univ, Sch Informat & Commun Technol, Gold Coast, Qld, Australia
Funding
National Natural Science Foundation of China;
Keywords
Physical-world adversarial attack; Face recognition; Environmental variations; Curriculum learning;
DOI
10.1016/j.patcog.2022.109009
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs) and has been widely applied to many safety-critical applications. However, recent studies have shown that DNNs are very vulnerable to adversarial examples, raising severe concerns on the security of real-world face recognition. In this work, we study sticker-based physical attacks on face recognition for better understanding its adversarial robustness. To this end, we first analyze in-depth the complicated physical-world conditions confronted by attacking face recognition, including the different variations of stickers, faces, and environmental conditions. Then, we propose a novel robust physical attack framework, dubbed PadvFace, to model these challenging variations specifically. Furthermore, we reveal that the attack complexities vary under different physical-world conditions and propose an efficient Curriculum Adversarial Attack (CAA) algorithm that gradually adapts adversarial stickers to environmental variations from easy to complex. Finally, we construct a standardized testing protocol to facilitate the fair evaluation of physical attacks on face recognition, and extensive experiments on both physical dodging and impersonation attacks demonstrate the superior performance of the proposed method. (c) 2022 Elsevier Ltd. All rights reserved.
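The record does not reproduce the Curriculum Adversarial Attack itself, but the easy-to-complex idea in the abstract can be illustrated with a toy sketch. Everything below is an assumption for illustration: `np.tanh` stands in for the face-recognition model, scalar `scale`/`shift` transforms stand in for environmental variations, and the expanding `active` set mimics the curriculum schedule (an EOT-style average gradient over the currently active conditions).

```python
import numpy as np

def curriculum_attack(x, target, transforms, steps_per_stage=100, lr=0.3):
    """Toy curriculum-style attack: optimize a scalar perturbation `delta`
    so that the stand-in model f(t(x + delta)) approaches `target`,
    introducing environmental transforms one at a time, easy to hard."""
    f = np.tanh                               # stand-in differentiable model
    stages = sorted(transforms, key=lambda t: t["difficulty"])
    delta = 0.0
    for k in range(1, len(stages) + 1):
        active = stages[:k]                   # curriculum: expand the condition set
        for _ in range(steps_per_stage):
            grad = 0.0
            for t in active:                  # EOT-style average gradient
                z = t["scale"] * (x + delta) + t["shift"]
                err = f(z) - target
                grad += 2.0 * err * (1.0 - f(z) ** 2) * t["scale"]
            delta -= lr * grad / len(active)
    return delta

# Hypothetical "environmental conditions", easiest first.
transforms = [
    {"scale": 1.0, "shift": 0.0,  "difficulty": 0},  # clean capture
    {"scale": 0.9, "shift": 0.1,  "difficulty": 1},  # mild lighting change
    {"scale": 0.7, "shift": -0.3, "difficulty": 2},  # harsh condition
]
delta = curriculum_attack(x=0.2, target=0.8, transforms=transforms)
```

After the schedule finishes, the single perturbation is a compromise that reduces the target error under all conditions rather than only the clean one, which is the point of training against the full curriculum.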
Pages: 12