Attention guided U-Net for accurate iris segmentation

Cited by: 121
Authors
Lian, Sheng [1 ]
Luo, Zhiming [2 ]
Zhong, Zhun [1 ]
Lin, Xiang [3 ,4 ]
Su, Songzhi [1 ]
Li, Shaozi [1 ]
Affiliations
[1] Xiamen Univ, Dept Cognit Sci, Xiamen, Peoples R China
[2] Xiamen Univ, Postdoc Ctr Informat & Commun Engn, Xiamen, Peoples R China
[3] Xiamen Univ, Fujian Prov Key Lab Ophthalmol & Visual Sci, Xiamen, Peoples R China
[4] Xiamen Univ, Inst Eye, Xiamen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Iris segmentation; U-Net; Attention; RECOGNITION; FEATURES;
DOI
10.1016/j.jvcir.2018.10.001
Chinese Library Classification (CLC)
TP [Automation & Computer Technology];
Discipline code
0812;
Abstract
Iris segmentation is a critical step for improving the accuracy of iris recognition, as well as for medical applications. Existing methods generally use whole eye images as input for network learning, which does not consider the geometric constraint that the iris only occurs in a specific area of the eye. As a result, such methods can easily be affected by irrelevant noisy pixels outside the iris region. To address this problem, we propose the ATTention U-Net (ATT-UNet), which guides the model to learn more discriminative features for separating iris and non-iris pixels. The ATT-UNet first regresses a bounding box of the potential iris region and generates an attention mask. The mask is then used as a weighting function merged with discriminative feature maps in the model, making the segmentation model pay more attention to the iris region. We evaluate our approach on UBIRIS.v2 and CASIA.IrisV4-distance, and achieve mean error rates of 0.76% and 0.38%, respectively. Experimental results show that our method achieves consistent improvement on both visible-wavelength and near-infrared iris images under challenging conditions, and surpasses other representative iris segmentation approaches. (C) 2018 Elsevier Inc. All rights reserved.
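The abstract describes merging a bounding-box-derived attention mask with feature maps as a weighting function. A minimal NumPy sketch of that idea is below; the function name, the box format `(y0, x0, y1, x1)`, and the outside-box weight `alpha` are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def attention_weighted_features(features, bbox, alpha=0.5):
    """Down-weight feature activations outside a predicted iris box.

    Sketch of the attention-guidance idea from the abstract: pixels
    inside the regressed iris bounding box keep full weight, pixels
    outside are attenuated, so later layers focus on the likely iris
    region. `alpha` (the outside weight) is a hypothetical parameter.
    """
    c, h, w = features.shape
    y0, x0, y1, x1 = bbox
    mask = np.full((h, w), alpha, dtype=features.dtype)
    mask[y0:y1, x0:x1] = 1.0       # full attention inside the box
    return features * mask         # mask broadcasts over channels

# Toy example: 2-channel 6x6 feature map, iris box over rows/cols 2..3.
feats = np.ones((2, 6, 6))
out = attention_weighted_features(feats, (2, 2, 4, 4), alpha=0.5)
```

In the paper the mask comes from a learned bounding-box regression branch and the merge happens inside the network; here the box is given and the merge is a plain elementwise product, which is the simplest form of such a weighting.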
Pages: 296-304
Page count: 9
References
44 in total
[1]  
[Anonymous], 2018, IEEE ACCESS, DOI 10.1109/ACCESS.2017.2784352
[2]  
[Anonymous], 2012, Advances in Neural Information Processing Systems, DOI 10.5555/2999325.2999452
[3]   GENERALIZING THE HOUGH TRANSFORM TO DETECT ARBITRARY SHAPES [J].
BALLARD, DH .
PATTERN RECOGNITION, 1981, 13 (02) :111-122
[4]  
Bazrafkan S., 2017, END END DEEP NEURAL
[5]  
Bowyer K. W., 2016, Handbook of iris recognition
[6]   Iris Recognition Based on Human-Interpretable Features [J].
Chen, Jianxu ;
Shen, Feng ;
Chen, Danny Ziyi ;
Flynn, Patrick J. .
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2016, 11 (07) :1476-1485
[7]  
Cicek O., 2016, INT C MED IM COMP CO, P424, DOI 10.1007/978-3-319-46723-8_49
[8]  
Ciresan D.C., 2011, P 22 INT JOINT C ART, V2, P1237
[9]  
Daugman J.G., 1994, U.S. Patent, Patent No. 5291560
[10]   HIGH CONFIDENCE VISUAL RECOGNITION OF PERSONS BY A TEST OF STATISTICAL INDEPENDENCE [J].
DAUGMAN, JG .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 1993, 15 (11) :1148-1161