Robust Physical-World Attacks on Face Recognition

Cited by: 26
Authors
Zheng, Xin [1 ]
Fan, Yanbo [2 ]
Wu, Baoyuan [3 ]
Zhang, Yong [2 ]
Wang, Jue [2 ]
Pan, Shirui [4 ]
Affiliations
[1] Monash Univ, Melbourne, Vic, Australia
[2] Tencent AI Lab, Shenzhen, Peoples R China
[3] Chinese Univ Hong Kong, Shenzhen Res Inst Big Data, Sch Data Sci, Shenzhen, Peoples R China
[4] Griffith Univ, Sch Informat & Commun Technol, Gold Coast, Qld, Australia
Funding
National Natural Science Foundation of China;
Keywords
Physical-world adversarial attack; Face recognition; Environmental variations; Curriculum learning;
DOI
10.1016/j.patcog.2022.109009
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs) and has been widely applied to many safety-critical applications. However, recent studies have shown that DNNs are very vulnerable to adversarial examples, raising severe concerns about the security of real-world face recognition. In this work, we study sticker-based physical attacks on face recognition for a better understanding of its adversarial robustness. To this end, we first analyze in depth the complicated physical-world conditions confronted when attacking face recognition, including the different variations of stickers, faces, and environmental conditions. Then, we propose a novel robust physical attack framework, dubbed PadvFace, to specifically model these challenging variations. Furthermore, we reveal that the attack complexities vary under different physical-world conditions and propose an efficient Curriculum Adversarial Attack (CAA) algorithm that gradually adapts adversarial stickers to environmental variations from easy to complex. Finally, we construct a standardized testing protocol to facilitate the fair evaluation of physical attacks on face recognition, and extensive experiments on both physical dodging and impersonation attacks demonstrate the superior performance of the proposed method. (c) 2022 Elsevier Ltd. All rights reserved.
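The curriculum idea in the abstract, optimizing an adversarial perturbation under environmental variations whose difficulty ramps up from easy to complex, can be illustrated with a toy sketch. Everything below is hypothetical stand-in code, not the paper's PadvFace implementation: a random linear map plays the role of a face-embedding model, additive noise plays the role of environmental variation, and the objective is an impersonation-style cosine similarity to a target embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a linear "embedding model" and a target identity.
W = rng.normal(size=(8, 16))          # fake face-embedding weights
x = rng.normal(size=16)               # benign face features
target = rng.normal(size=8)
target /= np.linalg.norm(target)      # target identity embedding (unit norm)

def embed(v):
    e = W @ v
    return e / np.linalg.norm(e)

def env_transform(v, severity, rng):
    # Environmental variation modeled as additive noise; "severity" sets its scale.
    return v + rng.normal(scale=severity, size=v.shape)

delta = np.zeros_like(x)              # the adversarial "sticker" perturbation
lr, eps = 0.1, 0.5

# Curriculum: begin under mild conditions (severity 0) and gradually increase.
for severity in np.linspace(0.0, 0.3, 6):
    for _ in range(50):
        # Average gradients over sampled transformations at the current difficulty.
        grad = np.zeros_like(delta)
        for _ in range(8):
            v = env_transform(x + delta, severity, rng)
            e = W @ v
            e_norm = e / np.linalg.norm(e)
            # Gradient of cosine similarity to the target embedding w.r.t. delta.
            g = W.T @ ((target - e_norm * (e_norm @ target)) / np.linalg.norm(e))
            grad += g
        delta = np.clip(delta + lr * grad / 8, -eps, eps)

cos_before = embed(x) @ target
cos_after = embed(x + delta) @ target
print(cos_before, cos_after)  # impersonation: similarity to the target should rise
```

The ramped `severity` schedule is the curriculum component: early rounds solve the easier low-variation problem, and later rounds refine the same perturbation under harsher conditions, rather than facing the hardest setting from the start.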
Pages: 12