Face-Iris multimodal biometric recognition system based on deep learning

Cited by: 7
Authors
Hattab, Abdessalam [1 ]
Behloul, Ali [1 ]
Affiliations
[1] Batna 2 Univ, Dept Comp Sci, LaSTIC Lab, 53 Constantine Rd, Fesdis 05078, Batna, Algeria
Keywords
Multimodal biometric system; Face recognition; Iris recognition; Deep Learning (DL); Transfer Learning (TL); Convolutional Neural Network (CNN); Neural networks
DOI
10.1007/s11042-023-17337-y
CLC number
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
With the increasing demand for user recognition in many recent applications, biometric identification technology is strongly recommended for application development. However, relying on a single biometric modality, such as the face or a fingerprint, has proven insufficient to meet the high-security requirements of sensitive military and government applications deployed at critical access points. Multimodal systems have therefore gained increasing attention as a way to overcome many of the limitations that affect the reliability and performance of unimodal biometric systems. In this paper, we propose a robust multimodal biometric recognition system based on the fusion of the face and both iris modalities. The proposed system uses YOLOv4-tiny to detect the regions of interest and a new, effective deep learning model inspired by the pre-trained Xception model to extract features. To retain the most stable (permanent) features we apply Principal Component Analysis, and for classification we use LinearSVC. In addition, we explore the performance of different fusion approaches, including image-level fusion, feature-level fusion, and two score-level fusion methods. To demonstrate the robustness and effectiveness of the proposed system, we adopt a two-fold cross-validation protocol during evaluation. Remarkably, the system achieves 100% recognition accuracy on the CASIA-ORL and SDUMLA-HMT multimodal databases, indicating exceptional performance and reliability.
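For orientation, the following is a minimal Python sketch of the pipeline summarized in the abstract: deep features extracted per modality, fused by concatenation (feature-level fusion), reduced with PCA, classified with LinearSVC, and evaluated with two-fold cross-validation. It is an assumption-laden illustration, not the authors' implementation: the stock pre-trained Keras Xception stands in for their custom Xception-inspired network, the YOLOv4-tiny ROI detection step is omitted, and the function names, PCA variance threshold, and LinearSVC settings are hypothetical.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications.xception import preprocess_input

# Pre-trained Xception backbone reused as a generic feature extractor
# (transfer learning); global average pooling gives one 2048-D vector per image.
# NOTE: stand-in for the paper's custom Xception-inspired network.
backbone = Xception(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 299, 299, 3), already cropped to the ROI."""
    return backbone.predict(preprocess_input(images), verbose=0)

def feature_level_fusion(face_imgs, left_iris_imgs, right_iris_imgs):
    """Feature-level fusion: concatenate the per-modality feature vectors."""
    feats = [extract_features(x) for x in (face_imgs, left_iris_imgs, right_iris_imgs)]
    return np.concatenate(feats, axis=1)

def evaluate(fused_features, labels):
    """Two-fold cross-validation of a PCA + LinearSVC classifier on fused features."""
    clf = make_pipeline(PCA(n_components=0.95),          # keep 95% variance (assumed)
                        LinearSVC(C=1.0, max_iter=10000))  # hyperparameters assumed
    cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
    return cross_val_score(clf, fused_features, labels, cv=cv).mean()

The same fused-feature matrix could also be replaced by per-modality classifier scores to emulate the score-level fusion variants mentioned in the abstract, whose exact combination rules are not specified here.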
Pages: 43349-43376
Number of pages: 28
Related papers
50 records in total
  • [31] Manifold Learning of Overcomplete Feature Spaces in a Multimodal Biometric Recognition System of Iris and Palmprint
    Naderi, Habibeh
    Soleimani, Behrouz Haji
    Matwin, Stan
2017 14TH CONFERENCE ON COMPUTER AND ROBOT VISION (CRV 2017), 2017: 191-196
  • [32] Chaotic Krill Herd with Deep Transfer Learning-Based Biometric Iris Recognition System
    Al-Mahafzah, Harbi
    AbuKhalil, Tamer
    Alqaralleh, Bassam A. Y.
CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 73 (03): 5703-5715
  • [33] Deep Learning Based Iris Recognition System
    Prasad, Puja S.
    Gunjan, Vinit Kumar
HELIX, 2018, 8 (04): 3567-3571
  • [34] A Modified Chaotic Binary Particle Swarm Optimization Scheme and Its Application in Face-Iris Multimodal Biometric Identification
    Xiong, Qi
    Zhang, Xinman
    Xu, Xuebin
    He, Shaobo
ELECTRONICS, 2021, 10 (02): 1-17
  • [35] A Multimodal Biometric Recognition System Based on Fusion of Palmprint, Fingerprint and Face
    Chaudhary, Sheetal
    Nath, Rajender
2009 INTERNATIONAL CONFERENCE ON ADVANCES IN RECENT TECHNOLOGIES IN COMMUNICATION AND COMPUTING (ARTCOM 2009), 2009: 596-600
  • [36] Cascade-based Multimodal Biometric Recognition System with Fingerprint and Face
    Singh, Pradeep Kumar
    Sharma, Pankaj
    MACROMOLECULAR SYMPOSIA, 2021, 397 (01)
  • [37] Implementation of a multiple biometric identification system based on face, fingerprints and iris recognition
    Stroica, Petre
    Vladescu, Marian
    ADVANCED TOPICS IN OPTOELECTRONICS, MICROELECTRONICS, AND NANOTECHNOLOGIES VI, 2012, 8411
  • [39] Multimodal Biometric Recognition using Iris & Fingerprint
    Bharadi, Vinayak Ashok
    Pandya, Bhavesh
    Nemade, Bhushan
2014 5TH INTERNATIONAL CONFERENCE CONFLUENCE THE NEXT GENERATION INFORMATION TECHNOLOGY SUMMIT (CONFLUENCE), 2014: 697-702
  • [40] A multimodal biometric system for personal verification based on different level fusion of iris and face traits
    Alay, Nada
    Al-Baity, Heyam H.
BIOSCIENCE BIOTECHNOLOGY RESEARCH COMMUNICATIONS, 2019, 12 (03): 565-576