D2SC-GAN: Dual deep-shallow channeled generative adversarial network for resolving low-resolution faces for recognition in classroom scenarios

Cited by: 5
Authors
Bhattacharjee A. [1 ]
Das S. [1 ]
Affiliations
[1] Visualization and Perception Laboratory, Department of Computer Science and Engineering, IIT Madras, Chennai
Source
IEEE Transactions on Biometrics, Behavior, and Identity Science | 2020 / Vol. 2 / No. 3
Keywords
Classroom FR; Convolutional neural networks; Face generation; Face recognition; Generative adversarial networks
DOI
10.1109/TBIOM.2020.2983524
Abstract
Face recognition using convolutional neural networks has achieved considerable success in constrained environments in the recent past. However, the performance of these methods deteriorates when the training and test distributions mismatch, as in classroom/surveillance scenarios. The test (probe) samples suffer from degradations such as noise, poor illumination, pose variations, occlusion, low resolution (LR), blur, and aliasing, compared to the crisp, rich training (gallery) set, which consists mostly of high-resolution (HR) mugshot images captured in laboratory settings. To cope with this scenario, we propose a novel dual deep-shallow channeled generative adversarial network (D2SC-GAN) that performs supervised domain adaptation (DA) by mapping LR degraded probe samples to their HR gallery-like counterparts for closed-set face recognition. D2SC-GAN uses a multi-component loss function comprising multi-resolution patchwise MSE and normalized chi-squared distance losses, along with a Kullback-Leibler divergence based loss. Moreover, we propose a novel classroom face dataset, the Indian Classroom Face Dataset (ICFD), which, to the best of our knowledge, is the first of its kind and will help explore the challenges of face recognition when used to automatically record attendance in classroom conditions. The proposed network achieves superior results on five real-world face datasets compared with recent state-of-the-art deep and shallow supervised domain adaptation, super-resolution (SR), and degraded face recognition (DFR) methods, demonstrating the effectiveness of our approach. © 2020 IEEE.
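The abstract names the three loss components (multi-resolution patchwise MSE, normalized chi-squared distance, and a KL-divergence term) but not their exact formulation or weighting. Below is a minimal PyTorch sketch of how such a multi-component reconstruction loss could be composed; the patch sizes, the clamping in the chi-squared term, the per-image normalization in the KL term, and the weights w_mse/w_chi/w_kl are all illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of a multi-component loss like the one the abstract
# describes. All names, patch sizes, and weights are assumptions; the
# paper's exact formulation is not given in the abstract.
import torch
import torch.nn.functional as F

def multires_patchwise_mse(x, y, patch_sizes=(4, 8, 16)):
    # Average MSE over non-overlapping patches at several resolutions.
    loss = x.new_zeros(())
    for p in patch_sizes:
        xp = F.unfold(x, kernel_size=p, stride=p)  # (B, C*p*p, n_patches)
        yp = F.unfold(y, kernel_size=p, stride=p)
        loss = loss + F.mse_loss(xp, yp)
    return loss / len(patch_sizes)

def normalized_chi_squared(x, y, eps=1e-8):
    # Chi-squared distance assumes non-negative inputs (e.g., intensities
    # in [0, 1]); clamping is a simplifying assumption.
    x, y = x.clamp(min=0), y.clamp(min=0)
    return torch.mean((x - y) ** 2 / (x + y + eps))

def kl_loss(fake, real, eps=1e-8):
    # KL divergence between per-image intensity distributions, obtained by
    # normalizing each image to sum to 1 (an assumption, not the paper's).
    p = fake.clamp(min=0).flatten(1) + eps
    q = real.clamp(min=0).flatten(1) + eps
    p = p / p.sum(dim=1, keepdim=True)
    q = q / q.sum(dim=1, keepdim=True)
    return F.kl_div(p.log(), q, reduction="batchmean")

def d2sc_reconstruction_loss(fake_hr, real_hr, w_mse=1.0, w_chi=0.5, w_kl=0.1):
    # Weighted sum of the three components; weights are illustrative.
    return (w_mse * multires_patchwise_mse(fake_hr, real_hr)
            + w_chi * normalized_chi_squared(fake_hr, real_hr)
            + w_kl * kl_loss(fake_hr, real_hr))

# Usage example on dummy HR face batches:
fake = torch.rand(2, 3, 64, 64)
real = torch.rand(2, 3, 64, 64)
print(d2sc_reconstruction_loss(fake, real))
```

In such a setup this reconstruction loss would typically be added to the standard adversarial GAN objective; how D2SC-GAN balances the terms is specified in the full paper, not in this record.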
Pages: 223-234
Page count: 11