Multi-task Learning-based Black-box Adversarial Attack on Face Recognition Systems

Cited by: 0
Authors
Kong, Jiefang [1 ]
Wang, Huabin [1 ]
Zhou, Jiacheng [2 ]
Tao, Liang [1 ]
Zhang, Jingjing [3 ]
Affiliations
[1] Anhui Univ, Sch Comp Sci & Technol, Anhui Prov Key Lab Multimodal Cognit Computat, Hefei, Anhui, Peoples R China
[2] Anhui Univ, Stony Brook Inst, Hefei, Anhui, Peoples R China
[3] Anhui Univ, Sch Comp Sci & Technol, Hefei, Anhui, Peoples R China
Source
2024 9TH INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING, ICSIP | 2024
Keywords
adversarial attacks; multi-task learning; black-box attacks; face recognition;
DOI
10.1109/ICSIP61881.2024.10671427
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
In recent years, deep learning techniques have achieved remarkable success in many computer vision tasks. However, security concerns have grown as adversarial attacks have exposed vulnerabilities in deep learning-based systems, and a large number of adversarial defense strategies have been developed to improve the security and robustness of face recognition (FR) systems. A common defense is to pair the FR model with an auxiliary model, since adversarial examples generated against one model are unlikely to pass when another model is used for verification. A further challenge for FR attacks is that the targeted models are typically black boxes: the attacker has no access to their internal parameters or gradient information. As a result, adversarial examples transfer poorly and attack performance is low, especially against online commercial FR systems. This paper therefore proposes a similarity-based shared-gradient adversarial attack algorithm to improve sample transferability. From a multi-task perspective, the algorithm selects an alternative model (AR) as the auxiliary model, develops a multi-task local optimization strategy and a cross-task gradient mapping strategy, and constructs a mapping mechanism between the two models to share gradient information. This enables weighted fusion of the generated perturbations and avoids the oscillations caused by differences in gradients and parameters across models, thereby improving generalization and making the generated adversarial examples more effective. The resulting adversarial examples can attack multiple models simultaneously, which greatly improves their transferability and robustness and strengthens the attack. Extensive experiments show that the attack success rate is substantially improved.
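The abstract describes weighted fusion of gradient information shared between the FR model and an auxiliary alternative model. The sketch below is only a minimal illustration of that general idea, assuming two differentiable surrogate embedding networks and fixed fusion weights; the function name, loss, and hyperparameters are placeholders and do not reproduce the paper's cross-task gradient mapping or multi-task local optimization strategies.

    import torch
    import torch.nn.functional as F

    def fused_gradient_attack(models, weights, image, target_embeddings,
                              eps=8 / 255, steps=10):
        """Illustrative sketch (not the paper's algorithm): iteratively perturb
        `image` so its embedding moves away from the enrolled identity under
        every surrogate model, fusing per-model gradients with fixed weights."""
        adv = image.clone().detach()
        alpha = eps / steps  # per-iteration step size
        for _ in range(steps):
            adv.requires_grad_(True)
            # Weighted sum of cosine similarities between the adversarial
            # embedding and each model's enrolled (target) embedding.
            loss = sum(w * F.cosine_similarity(m(adv), t).mean()
                       for m, w, t in zip(models, weights, target_embeddings))
            grad = torch.autograd.grad(loss, adv)[0]
            # Descend on the fused similarity: push the face away from the
            # enrolled identity under all surrogate models at once.
            adv = (adv - alpha * grad.sign()).detach()
            # Project back into the eps-ball around the clean image and [0, 1].
            adv = torch.clamp(image + torch.clamp(adv - image, -eps, eps), 0, 1)
        return adv

For instance, one could pass two pretrained embedding networks (e.g., ArcFace- and FaceNet-style wrappers mapping a normalized image batch to an identity embedding) together with weights such as (0.6, 0.4); both the choice of models and the weighting are hypothetical and used only for illustration.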
Pages: 554-558
Page count: 5