Training Deep Face Recognition for Efficient Inference by Distillation and Mutual Learning

Times Cited: 0
Authors
Shen, Guodong [1 ]
Shen, Yao [1 ]
Riaz, M. Naveed [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai, Peoples R China
Source
PROCEEDINGS OF THE 2018 IEEE INTERNATIONAL CONFERENCE ON PROGRESS IN INFORMATICS AND COMPUTING (PIC) | 2018
Keywords
face recognition; model distillation; mutual learning; deep learning;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Currently, most deep face recognition algorithms rely on heavy networks to achieve state-of-the-art performance. In many scenarios, the more challenging task is to reach relatively high accuracy at low computational cost, especially on embedded devices. In this paper, we propose a lightweight network for face recognition trained with distillation and deep mutual learning. In the proposed methods, a new indicator is designed to monitor model convergence, and an assessment criterion is developed for evaluation on the Labeled Faces in the Wild (LFW) dataset. Experiments show that our models outperform directly trained networks and other mobile face recognition solutions.
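The distillation component described in the abstract typically combines a soft-target term (matching the teacher's temperature-scaled output distribution) with a standard hard-label cross-entropy term. The sketch below illustrates this general Hinton-style distillation loss in plain NumPy; the temperature `T`, weight `alpha`, and function names are illustrative assumptions, not the paper's actual implementation (in deep mutual learning, the "teacher" logits would instead come from a peer student network).

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax, stabilized by subtracting the row max.
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: KL(teacher_T || student_T), scaled by T^2 so its
    # gradient magnitude stays comparable across temperatures.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = np.mean(
        np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)
    ) * T * T
    # Hard-label term: ordinary cross-entropy against the ground truth.
    p = softmax(student_logits)
    hard = np.mean(-np.log(p[np.arange(len(labels)), labels] + 1e-12))
    return alpha * soft + (1.0 - alpha) * hard
```

When the student's logits equal the teacher's, the KL term vanishes and only the hard-label term remains, which is why the soft term acts purely as a regularizer pulling the student toward the teacher's output distribution.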
Pages: 38-43
Page count: 6