FV-DGNN: A Distance-Based Graph Neural Network for Finger Vein Recognition

Cited by: 5
Authors
Chang, Jie [1]
Lai, Taotao [2]
Yang, Luokun [1]
Fang, Chang [3]
Li, Zuoyong [2]
Fujita, Hamido [4,5,6]
Affiliations
[1] Wannan Med Coll, Dept Med Informat, Wuhu 240001, Peoples R China
[2] Minjiang Univ, Coll Comp & Control Engn, Fuzhou 350108, Peoples R China
[3] Yijishan Hosp, Wannan Med Coll, Med Informat Ctr, Wuhu 240001, Peoples R China
[4] Univ Teknol Malaysia, Malaysia Japan Int Inst Technol MJIIT, Kuala Lumpur 54100, Malaysia
[5] Univ Granada, Andalusian Res Inst Data Sci & Computat Intelligence, Granada 18010, Spain
[6] Iwate Prefectural Univ, Reg Res Ctr, Takizawa 0200693, Japan
Funding
National Natural Science Foundation of China;
Keywords
Convolutional autoencoder (CAE) architecture; depthwise separable convolution layer; distance distribution; finger vein recognition; graph neural network (GNN); feature extraction; features;
DOI
10.1109/TIM.2023.3301062
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
As a promising biometric identification technology, finger vein recognition has gained considerable attention in the field of information security due to its inherent advantages, such as living-body recognition, noncontact operation, and high security. However, existing models often focus on pairwise matching of low-contrast infrared finger vein images, overlooking the underlying relationships among the matching information. To address this limitation, we propose a graph neural network (GNN) model that captures the distance-based interrelation among multiple pairs of samples. Specifically, we design an architecture that produces a binary finger vein mask image, which guides the model to capture high-level features of finger vein regions while ignoring noise in non-finger-vein regions. Moreover, a distance-based GNN architecture, which models the distance distribution over multiple pairs of finger vein images by fusing the distance information propagated along edges, is proposed to determine the matching degree of each image pair. Furthermore, to speed up the proposed model in practical applications, a depthwise separable convolution layer is adopted in the encoder of the convolutional neural network (CNN) architecture, significantly reducing the number of parameters. Extensive experiments on three public databases verify the effectiveness of the proposed model.
Pages: 11
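
The abstract notes that a depthwise separable convolution layer is used in the encoder to reduce the parameter count. As a rough illustration of why that factorization saves parameters, here is a minimal PyTorch sketch (channel widths and kernel size are assumed for the example; this is not the authors' implementation):

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Illustrative depthwise separable convolution block.

    A standard k x k convolution mapping C_in -> C_out channels needs
    k*k*C_in*C_out weights; the depthwise (k*k*C_in) plus pointwise
    (C_in*C_out) factorization sketched here needs far fewer.
    """

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        # Depthwise step: one k x k filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size,
            padding=kernel_size // 2, groups=in_channels, bias=False,
        )
        # Pointwise step: 1 x 1 convolution mixes channels and sets the output width.
        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


def count_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())


if __name__ == "__main__":
    # Compare parameter counts for an assumed 64 -> 128 channel, 3 x 3 layer.
    standard = nn.Conv2d(64, 128, 3, padding=1, bias=False)
    separable = DepthwiseSeparableConv(64, 128, 3)
    print("standard conv params  :", count_params(standard))   # 73,728
    print("separable block params:", count_params(separable))  # 9,024 (incl. batch norm)
    # Both produce feature maps of the same shape for a dummy single-channel-stack input.
    x = torch.randn(1, 64, 224, 224)
    assert standard(x).shape == separable(x).shape
```

For this assumed 3 x 3, 64-to-128-channel layer, the standard convolution needs 73,728 weights, while the depthwise-plus-pointwise factorization needs 8,768 convolution weights (plus 256 batch-norm parameters), roughly an 8x reduction; this is the kind of saving the abstract refers to when it says the encoder's parameters are reduced significantly.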