Robust Deep Identification using ECG and Multimodal Biometrics for Industrial Internet of Things

Cited by: 22
Authors
Al Alkeem, Ebrahim [1 ,2 ]
Yeun, Chan Yeob [1 ]
Yun, Jaewoong [3 ]
Yoo, Paul D. [4 ]
Chae, Myungsu [3 ]
Rahman, Arafatur [5 ]
Asyhari, A. Taufiq [6 ]
Affiliations
[1] Khalifa Univ, Ctr Cyber Phys Syst, EECS Dept, Abu Dhabi, U Arab Emirates
[2] Nawah Energy Co, Secur & Safety, Abu Dhabi, U Arab Emirates
[3] NOTA Inc, Res Inst, Daejeon, South Korea
[4] Univ London, Birkbeck Coll, CSIS, Mulet St, London WC1E 7HX, England
[5] Univ Malaysia Pahang, Fac Comp, Pahang, Malaysia
[6] Birmingham City Univ, CDT, Birmingham B4 7XG, W Midlands, England
Keywords
Personal identification; multimodal biometrics; deep learning; gender classification; electrocardiogram; fingerprint; face recognition; feature-level fusion; recognition; fusion
DOI
10.1016/j.adhoc.2021.102581
Chinese Library Classification
TP [automation technology, computer technology]
Subject Classification Code
0812
Abstract
The use of electrocardiogram (ECG) data for personal identification in the Industrial Internet of Things can achieve near-perfect accuracy under ideal conditions. However, real-life ECG data are often exposed to various types of noise and interference. A more reliable identification method can be achieved by employing additional features from other biometric sources. This work thus proposes a novel, robust, and reliable identification technique grounded in multimodal biometrics, which uses deep learning to combine fingerprint, ECG, and facial image data, and is particularly useful for identification and gender classification. The multimodal approach allows the model to handle a range of input domains without requiring independent training on each modality, and inter-domain correlation can improve the model's generalization on these tasks. In multitask learning, losses from one task help regularize the others, leading to better overall performance. The proposed approach merges the multimodal embeddings using feature-level and score-level fusion. To the best of our knowledge, this is the first work to combine multimodality, multitask learning, and different fusion methods. The proposed model achieves better generalization on the benchmark dataset, with feature-level fusion outperforming the other fusion methods. The model is further validated on noisy and incomplete data with missing modalities, and analyses of the experimental results are provided.
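The two fusion strategies named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's architecture: the embedding dimensions, the linear per-modality heads, and the simple-mean score rule are all assumptions made for the example.

```python
import numpy as np

# Hypothetical per-modality embeddings: 4 samples each.
# Dimensions are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
ecg_emb = rng.normal(size=(4, 16))     # 16-dim ECG embedding
face_emb = rng.normal(size=(4, 32))    # 32-dim face embedding
finger_emb = rng.normal(size=(4, 8))   # 8-dim fingerprint embedding

# Feature-level fusion: concatenate embeddings into one feature
# vector before a shared classifier head sees them.
fused_features = np.concatenate([ecg_emb, face_emb, finger_emb], axis=1)
assert fused_features.shape == (4, 56)  # 16 + 32 + 8

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Score-level fusion: each modality gets its own (here, random linear)
# classifier head; the per-class probability scores are then averaged.
n_classes = 3
modalities = (ecg_emb, face_emb, finger_emb)
heads = [rng.normal(size=(m.shape[1], n_classes)) for m in modalities]
scores = [softmax(m @ w) for m, w in zip(modalities, heads)]
fused_scores = np.mean(scores, axis=0)  # simple-mean fusion rule
assert np.allclose(fused_scores.sum(axis=1), 1.0)
```

Feature-level fusion lets a single downstream classifier exploit cross-modal correlations, while score-level fusion keeps the modalities independent until the final decision, which is why the two can behave differently under noisy or missing modalities.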
Pages: 13
References
66 in total
[21]   Human identification by quantifying similarity and dissimilarity in electrocardiogram phase space [J].
Fang, Shih-Chin ;
Chan, Hsiao-Lung .
PATTERN RECOGNITION, 2009, 42 (09) :1824-1831
[22]  
Gahi Y., 2008, 2008 New Technologies, Mobility and Security, P1, DOI 10.1109/NTMS.2008.ECP.29
[23]  
Kang, Kyung-Woo, 2012, [Computer and Information, Journal of the Institute of Electronics Engineers of Korea - CI], V49, P1
[24]  
Gavrilova M.L., 2013, INFORM SCI REFERENCE
[25]   PhysioBank, PhysioToolkit, and PhysioNet - Components of a new research resource for complex physiologic signals [J].
Goldberger, AL ;
Amaral, LAN ;
Glass, L ;
Hausdorff, JM ;
Ivanov, PC ;
Mark, RG ;
Mietus, JE ;
Moody, GB ;
Peng, CK ;
Stanley, HE .
CIRCULATION, 2000, 101 (23) :E215-E220
[26]  
Hinton G., 2012, COURS VID LECT
[27]   Deep Neural Networks for Acoustic Modeling in Speech Recognition [J].
Hinton, Geoffrey ;
Deng, Li ;
Yu, Dong ;
Dahl, George E. ;
Mohamed, Abdel-rahman ;
Jaitly, Navdeep ;
Senior, Andrew ;
Vanhoucke, Vincent ;
Patrick Nguyen ;
Sainath, Tara N. ;
Kingsbury, Brian .
IEEE SIGNAL PROCESSING MAGAZINE, 2012, 29 (06) :82-97
[28]  
HO TK, 1994, IEEE T PATTERN ANAL, V16, P66, DOI 10.1109/34.273716
[29]  
Hond D., 1997, BMVC
[30]   Face-iris multimodal biometric scheme based on feature level fusion [J].
Huo, Guang ;
Liu, Yuanning ;
Zhu, Xiaodong ;
Dong, Hongxing ;
He, Fei .
JOURNAL OF ELECTRONIC IMAGING, 2015, 24 (06)