A Multi-Teacher Assisted Knowledge Distillation Approach for Enhanced Face Image Authentication

Cited: 0
Authors
Cheng, Tiancong [1 ]
Zhang, Ying [1 ]
Yin, Yifang [2 ]
Zimmermann, Roger [3 ]
Yu, Zhiwen [1 ]
Guo, Bin [1 ]
Affiliations
[1] Northwestern Polytech Univ, Xian, Shaanxi, Peoples R China
[2] A*STAR Singapore, Inst Infocomm Res (I2R), Singapore, Singapore
[3] Natl Univ Singapore, Singapore, Singapore
Source
PROCEEDINGS OF THE 2023 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2023 | 2023
Funding
National Natural Science Foundation of China;
Keywords
face recognition; face anti-spoofing; face authentication; model compression; knowledge distillation; ATTENTION;
DOI
10.1145/3591106.3592280
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent deep-learning-based face recognition systems have achieved significant success. However, most existing face recognition systems remain vulnerable to spoofing attacks, in which a copy of a face image is used to deceive the authentication system. A number of solutions have been developed to counter this problem by building a separate face anti-spoofing model, which, however, introduces additional storage and computation requirements. Since both face recognition and face anti-spoofing stem from the analysis of the same face image, this paper explores a unified approach that removes the redundancy of the original dual-model design. To this end, we introduce a compressed multi-task model that performs both tasks simultaneously in a lightweight manner, with the potential to benefit lightweight IoT applications. Concretely, we regard the two original single-task deep models as teacher networks and propose a novel multi-teacher-assisted knowledge distillation method that guides our lightweight multi-task model to achieve satisfactory performance on both tasks. Additionally, to bridge the large gap between the deep teachers and the light student, a comprehensive feature alignment is further integrated by distilling multi-layer features. Extensive experiments on two benchmark datasets show a task accuracy of 93% while reducing the model size by 97% and the inference time by 56% compared to the original dual-model setup.
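The abstract describes distilling two single-task teachers (face recognition and face anti-spoofing) into one lightweight multi-task student, combining soft-label distillation with multi-layer feature alignment. The sketch below illustrates that general loss structure only; it is a minimal NumPy illustration with hypothetical names (`multi_teacher_loss`, `alpha`, `beta`) and weighting choices that are assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; subtracting the max is for numerical stability.
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Soft-label distillation: KL(teacher || student) at temperature T,
    rescaled by T^2 as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

def feature_alignment_loss(student_feats, teacher_feats):
    """Mean-squared error between intermediate features, averaged over layers
    (a simple stand-in for the paper's multi-layer feature alignment)."""
    return float(np.mean([np.mean((s - t) ** 2)
                          for s, t in zip(student_feats, teacher_feats)]))

def multi_teacher_loss(student_out, rec_teacher, fas_teacher, alpha=0.5, beta=0.1):
    # One student with two task heads, each distilled from its own teacher.
    l_rec = kd_loss(student_out["rec_logits"], rec_teacher["logits"])
    l_fas = kd_loss(student_out["fas_logits"], fas_teacher["logits"])
    l_feat = (feature_alignment_loss(student_out["rec_feats"], rec_teacher["feats"])
              + feature_alignment_loss(student_out["fas_feats"], fas_teacher["feats"]))
    return alpha * (l_rec + l_fas) + beta * l_feat
```

In practice each term would be computed on network tensors during training; the per-task ground-truth losses and the paper's specific weighting scheme are omitted here.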
Pages: 135 - 143
Page count: 9