Artificial Immune System of Secure Face Recognition Against Adversarial Attacks

Cited: 0
Authors
Ren, Min [1 ]
Wang, Yunlong [2 ]
Zhu, Yuhao [3 ]
Huang, Yongzhen [1 ]
Sun, Zhenan [2 ]
Li, Qi [2 ]
Tan, Tieniu [2 ]
Affiliations
[1] Beijing Normal Univ, Sch Artificial Intelligence, Beijing, Peoples R China
[2] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, MAIS, Beijing, Peoples R China
[3] China Acad Railway Sci, Postgrad Dept, Beijing, Peoples R China
Keywords
Adversarial defense; Face recognition; Artificial immune system; Self-supervised adversarial learning
DOI
10.1007/s11263-024-02153-0
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep learning-based face recognition models are vulnerable to adversarial attacks. Unlike general noise, imperceptible adversarial noise can cause catastrophic errors in deep face recognition models. The key difference between adversarial noise and general noise is its specificity: adversarial attack methods produce noise tailored to the characteristics of each individual image and recognition model, so different samples and recognition models give rise to distinct adversarial noise patterns. This poses a significant challenge for adversarial defense, and the challenge is even greater for face recognition because it is inherently an open-set task. Tackling it requires customized processing for each input sample. Drawing inspiration from the biological immune system, which can identify and respond to diverse threats, this paper develops an artificial immune system that provides adversarial defense for face recognition. The proposed defense model incorporates antibody cloning, mutation, selection, and memory mechanisms to generate a distinct "antibody" for each input sample, where an "antibody" denotes a specialized noise removal procedure. Furthermore, we introduce a self-supervised adversarial training mechanism that serves as a simulated rehearsal of immune system invasions. Extensive experimental results demonstrate the efficacy of the proposed method, which surpasses state-of-the-art adversarial defense methods. The source code is available at https://github.com/RenMin1991/SIDE
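The abstract describes the immune metaphor only at a high level. Below is a minimal, self-contained sketch of what a clonal-selection loop for per-sample noise removal could look like. Every name in it (embed, denoise, affinity, clonal_selection) is hypothetical and invented for illustration; the toy affinity score (embedding stability under small random perturbations) merely stands in for the paper's self-supervised criterion, and a single shrinkage strength stands in for a real denoising "antibody". This is not the authors' SIDE implementation.

# A minimal sketch of clonal selection for per-sample noise removal.
# All names are hypothetical; this is NOT the SIDE implementation.
import numpy as np

rng = np.random.default_rng(0)

def embed(image):
    """Stand-in for a face recognition model's feature extractor."""
    return np.tanh(image.reshape(-1)[:128])

def denoise(image, strength):
    """A toy 'antibody': shrinkage toward the image mean,
    parameterized by a single strength value in [0, 1]."""
    return (1.0 - strength) * image + strength * image.mean()

def affinity(image, strength):
    """Affinity = embedding stability under small random perturbations,
    a self-supervised proxy for 'the adversarial noise was removed'."""
    clean = embed(denoise(image, strength))
    scores = []
    for _ in range(4):
        jitter = image + rng.normal(0, 0.01, image.shape)
        scores.append(-np.linalg.norm(embed(denoise(jitter, strength)) - clean))
    return float(np.mean(scores))

def clonal_selection(image, pop_size=8, clones=4, generations=10):
    """Clone the best antibodies, mutate the clones, select by affinity,
    and keep the best antibody found so far as 'memory'."""
    population = list(rng.uniform(0.0, 1.0, pop_size))  # initial antibodies
    memory = None                                       # memory mechanism
    for _ in range(generations):
        population.sort(key=lambda s: affinity(image, s), reverse=True)
        best = population[:pop_size // 2]
        offspring = [np.clip(s + rng.normal(0, 0.05), 0.0, 1.0)
                     for s in best for _ in range(clones)]  # clone + mutate
        population = best + offspring
        if memory is None or affinity(image, population[0]) > affinity(image, memory):
            memory = population[0]
    return memory

image = rng.normal(0, 1, (16, 16))
print("selected antibody strength:", clonal_selection(image))

In the actual method, the antibody would be a learned noise removal operator rather than a scalar, and selection would be driven by the paper's self-supervised adversarial training signal rather than this toy stability score.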
Pages: 5718-5740
Number of pages: 23