Dual Teacher Knowledge Distillation With Domain Alignment for Face Anti-Spoofing
Cited: 0
Authors:
Kong, Zhe [1]; Zhang, Wentian [2]; Wang, Tao [3]; Zhang, Kaihao [4]; Li, Yuexiang [5]; Tang, Xiaoying [6]; Luo, Wenhan [1,7]
Affiliations:
[1] Sun Yat Sen Univ, Sch Cyber Sci & Technol, Shenzhen Campus, Shenzhen 518060, Peoples R China
[2] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
[3] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210093, Peoples R China
[4] Harbin Inst Technol, Shenzhen 518055, Peoples R China
[5] Guangxi Med Univ, Life Sci Inst, Nanning 530021, Guangxi, Peoples R China
[6] Southern Univ Sci & Technol, Dept Elect & Elect Engn, Shenzhen 518055, Peoples R China
[7] Hong Kong Univ Sci & Technol, Div Emerging Interdisciplinary Areas, Hong Kong, Peoples R China
Funding:
National Natural Science Foundation of China;
Keywords:
Face recognition;
Feature extraction;
Task analysis;
Data models;
Training;
Perturbation methods;
Testing;
Face anti-spoofing;
knowledge distillation;
domain generalization;
adversarial attack;
DOI:
10.1109/TCSVT.2024.3451294
Chinese Library Classification (CLC):
TM [Electrical Engineering];
TN [Electronics and Communication Technology];
Subject Classification Codes:
0808;
0809;
Abstract:
Face recognition systems are vulnerable to various presentation attacks, and their security has become an increasingly critical concern. Although many face anti-spoofing (FAS) methods perform well in intra-dataset scenarios, their generalization to unseen domains remains a challenge. To address this issue, some methods adopt domain adversarial training (DAT) to extract domain-invariant features. In contrast, in this paper, we propose a domain adversarial attack (DAA) method that adds perturbations to the input images so that they become indistinguishable across domains, thereby enabling domain alignment. Moreover, since models trained on limited data and attack types cannot generalize well to unknown attacks, we propose a dual perceptual and generative knowledge distillation framework for face anti-spoofing that exploits pre-trained face-related models containing rich face priors. Specifically, we adopt two different face-related models as teachers to transfer knowledge to the target student model. The pre-trained teachers do not come from the face anti-spoofing task but from a perceptual and a generative task, respectively, which implicitly augments the data. By combining DAA and dual-teacher knowledge distillation, we develop a dual teacher knowledge distillation with domain alignment (DTDA) framework for face anti-spoofing. The advantages of the proposed method are verified through extensive ablation studies and comparisons with state-of-the-art methods on public datasets across multiple protocols.
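The DAA component can be read as an adversarial attack aimed at a domain classifier rather than at the spoof classifier. Below is a minimal PyTorch-style sketch of that idea, assuming an FGSM-style single-step attack; the module names (feature_extractor, domain_classifier) and the epsilon budget are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def domain_adversarial_attack(images, domain_labels, feature_extractor,
                              domain_classifier, epsilon=2.0 / 255):
    """Return inputs perturbed to confuse the domain classifier (sketch)."""
    images = images.clone().detach().requires_grad_(True)
    # Predict the source domain of each image from its features.
    domain_logits = domain_classifier(feature_extractor(images))
    # Ascend the domain-classification loss so the perturbed image becomes
    # harder to assign to its true source domain (input-level alignment).
    loss = F.cross_entropy(domain_logits, domain_labels)
    grad, = torch.autograd.grad(loss, images)
    adv_images = images + epsilon * grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()
```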
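Likewise, the dual-teacher distillation can be sketched as two feature-mimicking losses against frozen teachers drawn from perceptual and generative face tasks. The projection heads proj_p / proj_g, the MSE feature loss, and the loss weights below are hypothetical choices for illustration; the paper's actual distillation objective may differ.

```python
import torch.nn.functional as F

def dual_teacher_kd_loss(student_feat, perceptual_feat, generative_feat,
                         proj_p, proj_g, w_p=1.0, w_g=1.0):
    """Distill from two frozen teachers into one student feature (sketch).

    proj_p / proj_g: small learnable layers mapping the student feature
    into each teacher's feature space (hypothetical helpers).
    """
    # Match the perceptual teacher's representation (frozen, no gradients).
    loss_p = F.mse_loss(proj_p(student_feat), perceptual_feat.detach())
    # Match the generative teacher's representation.
    loss_g = F.mse_loss(proj_g(student_feat), generative_feat.detach())
    return w_p * loss_p + w_g * loss_g
```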
Pages: 13177-13189
Number of pages: 13