Twin Deep Convolutional Neural Network-based Cross-spectral Periocular Recognition

Cited by: 9
Authors
Behera, Sushree S. [1 ]
Mandal, Bappaditya [2 ]
Puhan, Niladri B. [1 ]
Affiliations
[1] Indian Inst Technol, Sch Elect Sci, Bhubaneswar, India
[2] Keele Univ, Sch Comp & Math, Newcastle, England
Source
2020 TWENTY SIXTH NATIONAL CONFERENCE ON COMMUNICATIONS (NCC 2020) | 2020
Keywords
convolutional neural network; cross-spectral; deep learning; heterogeneous; periocular recognition;
DOI
10.1109/ncc48643.2020.9056008
CLC Classification Number
TN [Electronic technology, communication technology];
Subject Classification Code
0809;
Abstract
Recognition of individuals using periocular information has gained significant importance due to its advantages over other biometric traits such as face and iris in challenging scenarios where it is difficult to acquire either the full facial region or iris images. Recent surveillance applications give rise to a challenging research problem where individuals are recognized in cross-spectral environments, in which a probe infrared (IR) image is matched against a gallery of visible (VIS) images and vice versa. Cross-spectral recognition has been studied mostly for the face and iris traits over the past few years; however, the performance of the periocular biometric in the cross-spectral domain still needs to be improved. In this paper, we propose a twin deep convolutional neural network (TCNN) with shared parameters to match VIS periocular images with near-IR (NIR) ones. The proposed TCNN finds the similarity between the VIS and NIR image pairs applied at its input rather than classifying them into a certain class. The learning mechanism involved in this network is such that the distance between the images corresponding to genuine pairs is reduced and that of the impostor pairs is maximized. Based on the experimental results and analysis on three publicly available cross-spectral periocular databases, the TCNN achieves state-of-the-art recognition results.
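The abstract describes a twin (Siamese) CNN with shared parameters trained so that embedding distances shrink for genuine VIS-NIR pairs and grow for impostor pairs. The sketch below is a minimal PyTorch illustration of that pair-similarity objective only; it is not the authors' architecture, and the backbone layers, embedding size, and margin value are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwinCNN(nn.Module):
    """Twin (Siamese) CNN: one shared backbone embeds both VIS and NIR images."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        # Small illustrative backbone; the paper's actual network is not reproduced here.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embedding_dim)

    def embed(self, x):
        feat = self.backbone(x).flatten(1)
        return F.normalize(self.fc(feat), dim=1)

    def forward(self, vis_img, nir_img):
        # Shared parameters: the same weights process both spectral bands.
        return self.embed(vis_img), self.embed(nir_img)

def contrastive_loss(emb_vis, emb_nir, label, margin=1.0):
    """label = 1 for genuine (same-subject) pairs, 0 for impostor pairs."""
    dist = F.pairwise_distance(emb_vis, emb_nir)
    # Pull genuine pairs together; push impostor pairs beyond the margin.
    return (label * dist.pow(2) + (1 - label) * F.relu(margin - dist).pow(2)).mean()

At verification time, a VIS probe and an NIR gallery image would each be passed through the shared backbone and compared by embedding distance, with a threshold (or ranking) deciding genuine versus impostor matches.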
Pages: 6