Deep Convolutional Neural Networks for Unconstrained Ear Recognition

Cited by: 40
Authors
Alshazly, Hammam [1,2]
Linse, Christoph [1]
Barth, Erhardt [1]
Martinetz, Thomas [1]
Affiliations
[1] Univ Lubeck, Inst Neuro & Bioinformat, D-23562 Lubeck, Germany
[2] South Valley Univ, Dept Math, Fac Sci, Qena 83523, Egypt
Source
IEEE ACCESS | 2020, Vol. 8, Issue 08
Keywords
Ear; Feature extraction; Image recognition; Training; Visualization; Task analysis; Lighting; Ear recognition; biometrics; deep learning; convolutional neural networks; transfer learning; feature visualization; TECHNOLOGY; FUSION;
DOI
10.1109/ACCESS.2020.3024116
CLC number
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
This paper employs state-of-the-art Deep Convolutional Neural Networks (CNNs), namely AlexNet, VGGNet, Inception, ResNet and ResNeXt, in a first experimental study of ear recognition on the unconstrained EarVN1.0 dataset. As the dataset size is still insufficient to train deep CNNs from scratch, we utilize transfer learning and propose different domain adaptation strategies. The experiments show that our networks, which are fine-tuned using custom-sized inputs determined specifically for each CNN architecture, obtain state-of-the-art recognition performance, with a single ResNeXt101 model achieving a rank-1 recognition accuracy of 93.45%. Moreover, we achieve the best rank-1 recognition accuracy of 95.85% using an ensemble of fine-tuned ResNeXt101 models. To explain the performance differences between models and make our results more interpretable, we employ the t-SNE algorithm to explore and visualize the learned features. The feature visualizations show well-separated clusters representing the ear images of the different subjects, indicating that discriminative and ear-specific features are learned when applying our proposed learning strategies.
Pages: 170295-170310
Number of pages: 16
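The abstract describes fine-tuning ImageNet-pretrained CNNs on EarVN1.0 with input sizes chosen per architecture. The snippet below is a minimal transfer-learning sketch of that idea in PyTorch, not the authors' code: the dataset path, the 300-pixel input size, the subject count of 164, and all hyperparameters are illustrative assumptions. The reported ensemble result could, for instance, combine the class scores of several such independently fine-tuned ResNeXt101 models, though the exact fusion rule is not stated in the abstract.

```python
# Minimal transfer-learning sketch (not the authors' code): fine-tune an
# ImageNet-pretrained ResNeXt101 on ear images with a custom input size.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

num_subjects = 164        # assumed number of identities in EarVN1.0
custom_input_size = 300   # placeholder for the architecture-specific input size

# Load a pretrained ResNeXt101 and replace its classification head.
model = models.resnext101_32x8d(weights=models.ResNeXt101_32X8D_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_subjects)

# Resize every ear image to the custom input size and normalize with ImageNet stats.
train_tf = transforms.Compose([
    transforms.Resize((custom_input_size, custom_input_size)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# "EarVN1.0/train" is a hypothetical ImageFolder-style copy of the dataset.
train_set = datasets.ImageFolder("EarVN1.0/train", transform=train_tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # one epoch shown for brevity
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

A similarly hedged sketch of the t-SNE visualization step mentioned in the abstract follows; `features` and `labels` are random placeholders standing in for penultimate-layer activations of the fine-tuned network and the corresponding subject IDs, not variables from the paper.

```python
# Project high-dimensional ear features to 2-D with t-SNE for visual inspection.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

features = np.random.rand(500, 2048)       # placeholder 2048-d embeddings
labels = np.random.randint(0, 20, 500)     # placeholder subject IDs

embedded = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)
plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, s=5, cmap="tab20")
plt.title("t-SNE projection of learned ear features")
plt.show()
```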