Invisible Adversarial Attacks on Deep Learning-Based Face Recognition Models

Cited by: 2
Authors
Lin, Chih-Yang [1 ]
Chen, Feng-Jie [2 ]
Ng, Hui-Fuang [3 ]
Lin, Wei-Yang [2 ,4 ]
Affiliations
[1] Natl Cent Univ, Dept Mech Engn, Taoyuan 32001, Taiwan
[2] Natl Chung Cheng Univ, Dept Comp Sci & Informat Engn, Chiayi 62102, Taiwan
[3] Univ Tunku Abdul Rahman, Dept Comp Sci, Kampar 31900, Malaysia
[4] Natl Chung Cheng Univ, Adv Inst Mfg High Tech Innovat, Chiayi 62102, Taiwan
Source
IEEE ACCESS | 2023, Vol. 11
Keywords
Face recognition; Perturbation methods; Deep learning; Image segmentation; Facial features; Neural networks; Adversarial attack
DOI
10.1109/ACCESS.2023.3279488
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Deep learning technology has grown rapidly in recent years and achieved tremendous success in computer vision. Many deep learning technologies are now part of daily life, such as face recognition systems. However, as human life increasingly relies on deep neural networks, their potential harms are coming to light, particularly with regard to security. A growing number of studies show that existing deep learning-based face recognition models are vulnerable to adversarial samples, producing misjudgments that could have serious consequences. Existing adversarial face images, however, are relatively easy to identify with the naked eye, which makes it difficult for attackers to carry out attacks on face recognition systems in practice. This paper proposes a method, based on facial landmark detection and superpixel segmentation, for generating adversarial face images that are indistinguishable from the source images. First, the eyebrow, eye, nose, and mouth regions are extracted from the face image using a facial landmark detection algorithm. Next, a superpixel segmentation algorithm expands each extracted landmark region to include neighboring pixels with similar values. Lastly, the segmented regions are used as masks that guide existing attack methods to insert adversarial noise only within the masked areas. Experimental results show that our method can generate adversarial samples with high Structural Similarity Index Measure (SSIM) values at the cost of only a small reduction in attack success rate. In addition, to simulate real-time physical attacks, printouts of the adversarial images generated by the proposed method are presented to the face recognition system via a camera and still fool the face recognition model. These results indicate that the proposed method can successfully perform adversarial attacks on face recognition systems in real-world scenarios.
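The abstract describes a three-step pipeline: locate facial landmarks, grow each landmark region with superpixel segmentation, and use the result as a mask that confines the noise of an existing attack. Below is a minimal sketch of that pipeline, assuming dlib's 68-point landmark model, scikit-image's SLIC superpixels, and a single PyTorch FGSM-style step standing in for the paper's unnamed "existing attack methods"; the landmark index range, n_segments, and eps values are illustrative assumptions, not the authors' released code.

# Sketch of the mask-guided attack pipeline from the abstract.
# dlib landmarks + SLIC superpixels build the mask; a masked
# FGSM-style step stands in for the paper's unnamed attack methods.
import dlib
import numpy as np
import torch
import torch.nn.functional as F
from skimage.segmentation import slic

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# In the 68-point model, indices 17-67 cover the eyebrows, nose,
# eyes, and mouth (0-16 are the jawline, which the mask excludes).
LANDMARK_IDS = range(17, 68)

def facial_region_mask(image_rgb, n_segments=300):
    """Mark every superpixel that contains a selected facial landmark."""
    h, w = image_rgb.shape[:2]
    mask = np.zeros((h, w), dtype=np.float32)
    faces = detector(image_rgb, 1)
    if not faces:
        return mask
    shape = predictor(image_rgb, faces[0])
    segments = slic(image_rgb, n_segments=n_segments, compactness=10)
    for i in LANDMARK_IDS:
        p = shape.part(i)
        if 0 <= p.y < h and 0 <= p.x < w:
            # Include the whole superpixel around the landmark, i.e.
            # the neighboring pixels with similar values.
            mask[segments == segments[p.y, p.x]] = 1.0
    return mask

def masked_fgsm(model, image, label, mask, eps=8 / 255):
    """One FGSM step whose perturbation is confined to the mask."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    mask_t = torch.as_tensor(mask, dtype=image.dtype, device=image.device)
    noise = eps * image.grad.sign() * mask_t  # zero outside facial regions
    return (image + noise).clamp(0, 1).detach()

The abstract does not name the underlying attacks, so the FGSM step is purely illustrative; any gradient-based method can be restricted the same way by multiplying its perturbation with the mask. Imperceptibility against the source image can then be checked with skimage.metrics.structural_similarity, matching the SSIM evaluation the abstract reports.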
Pages: 51567-51577
Page count: 11