M-SAN: a patch-based transferable adversarial attack using the multi-stack adversarial network

Cited: 0
Authors
Agrawal, Khushabu [1 ]
Bhatnagar, Charul [1 ]
Affiliations
[1] GLA Univ, Comp Engn & Applicat, Mathura, India
Keywords
adversarial attack; black-box attack; patch-based attack; target attack; untargeted attack;
DOI
10.1117/1.JEI.32.2.023033
CLC (Chinese Library Classification) codes
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject classification codes
0808 ; 0809 ;
Abstract
In recent times, deep neural networks (DNNs) have been used extensively in many areas and have achieved great success. State-of-the-art face recognition (FR) systems have reached high accuracy using DNNs. However, researchers have found that DNN-based systems fail when facing adversarial attacks on images. In an adversarial attack, the adversary modifies face images in such a way that a human does not perceive the changes in the generated image, yet FR systems can no longer recognize the faces correctly. We propose a method to generate an adversarial attack: a patch-based attack produced by a multi-stack adversarial network (M-SAN), built on a generative adversarial network, under black-box settings. The M-SAN attack uses a patch to target features of the face image and fool the FR model in both targeted and untargeted modes. Many previous attack generation methods operate under white-box settings, which require knowledge of the target model's architecture and parameters; as a result, a single white-box attack cannot fool different FR models. Our approach instead assumes black-box settings, in which the attacker has no access to the target model's parameters: the attack is generated with the help of a surrogate model and then transferred to the various target models. The proposed M-SAN attack is evaluated against FR models including FaceNet, ArcFace, and CosFace on the Labeled Faces in the Wild (LFW) dataset.
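The transfer setup described in the abstract, optimizing an adversarial patch against a surrogate model and then applying it to unseen target models, can be illustrated with a minimal sketch. Everything below is a toy assumption for illustration: the "surrogate" is a linear classifier rather than the paper's GAN-based M-SAN or a real FR model, and the function names (`apply_patch`, `untargeted_patch_attack`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a surrogate FR model: a linear classifier over flattened
# 32x32 "face" images with 10 identities. (Assumption for illustration only;
# the paper attacks FaceNet/ArcFace/CosFace via a GAN-based generator.)
W = rng.normal(size=(10, 32 * 32))

def surrogate_logits(img):
    return W @ img.ravel()

def apply_patch(img, patch, top=4, left=4):
    """Overwrite a fixed region of the image with the (clipped) patch."""
    out = img.copy()
    h, w = patch.shape
    out[top:top + h, left:left + w] = np.clip(patch, 0.0, 1.0)
    return out

def untargeted_patch_attack(img, true_label, patch_size=8, steps=50, lr=0.5):
    """Optimize patch pixels so the surrogate misclassifies the image."""
    patch = rng.uniform(size=(patch_size, patch_size))
    for _ in range(steps):
        adv = apply_patch(img, patch)
        logits = surrogate_logits(adv)
        # Strongest rival class (highest logit other than the true class).
        rival = int(np.argmax(np.where(np.arange(10) == true_label,
                                       -np.inf, logits)))
        # For a linear surrogate the gradient of the margin
        # (logit_true - logit_rival) w.r.t. the pixels is just W_true - W_rival;
        # descend it on the patch region only.
        grad_img = (W[true_label] - W[rival]).reshape(32, 32)
        patch -= lr * grad_img[4:4 + patch_size, 4:4 + patch_size]
        patch = np.clip(patch, 0.0, 1.0)
    return patch

img = rng.uniform(size=(32, 32))
label = int(np.argmax(surrogate_logits(img)))
patch = untargeted_patch_attack(img, label)
adv = apply_patch(img, patch)
print("surrogate label flipped:",
      int(np.argmax(surrogate_logits(adv))) != label)
```

In the black-box setting the same optimized `patch` would then be pasted onto inputs fed to the unseen target models; the attack succeeds to the extent that the perturbation transfers from the surrogate.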
Pages: 21