Modified Anam-Net Based Lightweight Deep Learning Model for Retinal Vessel Segmentation

Cited by: 10
Authors
Haider, Syed Irtaza [1 ]
Aurangzeb, Khursheed [2 ]
Alhussein, Musaed [2 ]
Affiliations
[1] King Saud Univ, Coll Comp & Informat Sci, Riyadh 11543, Saudi Arabia
[2] King Saud Univ, Dept Comp Engn, Coll Comp & Informat Sci, Riyadh 11543, Saudi Arabia
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2022, Vol. 73, No. 01
Keywords
Anam-Net; convolutional neural network; cross-database training; data augmentation; deep learning; fundus images; retinal vessel segmentation; semantic segmentation; CONDITIONAL RANDOM-FIELD; NEURAL-NETWORKS; IMAGES
DOI
10.32604/cmc.2022.025479
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
The accurate segmentation of retinal vessels is a challenging task because of the presence of various pathologies, the low contrast of thin vessels, and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation, but at the cost of high computational complexity. To address these challenges and to reduce the computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessel segmentation. The proposed model consists of an encoder-decoder architecture with bottleneck layers that perform depth-wise squeezing, followed by full convolution, and finally depth-wise stretching. The model is inspired by the recently developed Anam-Net, which was evaluated on CT images for COVID-19 identification. In our lightweight model, we use a stack of two 3 x 3 convolution layers (without spatial pooling in between) instead of the single 3 x 3 convolution layer proposed in Anam-Net, to increase the receptive field and to reduce the trainable parameters. The proposed method uses fewer filters in all convolutional layers than the original Anam-Net and does not increase the number of filters as the spatial resolution decreases. These modifications do not compromise segmentation accuracy, yet they make the architecture significantly lighter in terms of trainable parameters and computation time. The proposed architecture has considerably fewer parameters (1.01 M) than Anam-Net (4.47 M), U-Net (31.05 M), SegNet (29.50 M), and most other recent works. The proposed model requires no problem-specific pre- or post-processing and does not rely on handcrafted features. Moreover, being both accurate and lightweight makes the proposed method a suitable candidate for screening platforms at the point of care. We evaluated the proposed model on the open-access DRIVE, STARE, and CHASE_DB datasets. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, the fully convolutional network (FCN), SegNet, CcNet, ResWNet, the residual connection-based encoder-decoder network (RCED-Net), and the scale-space approximation network (SSANet), in terms of {Dice coefficient, sensitivity (SN), accuracy (ACC), area under the ROC curve (AUC)}, with scores of {0.8184, 0.8561, 0.9669, 0.9868} on the DRIVE dataset, {0.8233, 0.8581, 0.9726, 0.9901} on the STARE dataset, and {0.8138, 0.8604, 0.9752, 0.9906} on the CHASE_DB dataset. Additionally, we performed cross-training experiments on the DRIVE and STARE datasets; the results indicate the generalization ability and robustness of the proposed model.
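The two architectural ideas described in the abstract (the stacked 3 x 3 convolutions and the depth-wise squeeze / full-convolution / depth-wise stretch bottleneck) can be sketched in a few lines of PyTorch. The sketch below is illustrative only: the channel counts, the squeeze ratio, and the use of batch normalization and ReLU are assumptions for the sake of a runnable example, not details taken from the paper.

import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    # Two stacked 3 x 3 convolutions with no spatial pooling in between,
    # replacing the single 3 x 3 convolution of the original Anam-Net.
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class AnamBottleneck(nn.Module):
    # Anam-Net-style bottleneck: a 1 x 1 convolution squeezes the channel
    # depth, a 3 x 3 full convolution processes the squeezed feature map,
    # and a 1 x 1 convolution stretches the depth back. The squeeze ratio
    # of 4 is a hypothetical choice for illustration.
    def __init__(self, channels, squeeze_ratio=4):
        super().__init__()
        squeezed = max(channels // squeeze_ratio, 1)
        self.block = nn.Sequential(
            nn.Conv2d(channels, squeezed, kernel_size=1),             # depth-wise squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(squeezed, squeezed, kernel_size=3, padding=1),  # full convolution
            nn.ReLU(inplace=True),
            nn.Conv2d(squeezed, channels, kernel_size=1),             # depth-wise stretch
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Example: a 48-channel feature map at 64 x 64 resolution passes through
# both blocks without changing its spatial size.
features = torch.randn(1, 48, 64, 64)
out = AnamBottleneck(48)(DoubleConv(48, 48)(features))
print(out.shape)  # torch.Size([1, 48, 64, 64])

As a side note, stacking two 3 x 3 convolutions yields an effective 5 x 5 receptive field while using 2 x 9 = 18 weights per input-output channel pair instead of the 25 a single 5 x 5 convolution would need, which is the usual form of the receptive-field versus parameter trade-off the abstract alludes to.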
Pages: 1501-1526
Page count: 26
References
49 references in total
[1] Abbas W. PROC NEURAL INFORM P, 2019.
[2] Abramoff, Michael D.; Folk, James C.; Han, Dennis P.; Walker, Jonathan D.; Williams, David F.; Russell, Stephen R.; Massin, Pascale; Cochener, Beatrice; Gain, Philippe; Tang, Li; Lamard, Mathieu; Moga, Daniela C.; Quellec, Gwenole; Niemeijer, Meindert. Automated Analysis of Retinal Images for Detection of Referable Diabetic Retinopathy. JAMA OPHTHALMOLOGY, 2013, 131(03): 351-357.
[3] Almotiri, Jasem; Elleithy, Khaled; Elleithy, Abdelrahman. Retinal Vessels Segmentation Techniques and Algorithms: A Survey. APPLIED SCIENCES-BASEL, 2018, 8(02).
[4] Alom, Md Zahangir; Yakopcic, Chris; Hasan, Mahmudul; Taha, Tarek M.; Asari, Vijayan K. Recurrent residual U-Net for medical image segmentation. JOURNAL OF MEDICAL IMAGING, 2019, 6(01).
[5] Anuradha N. Oman J Ophthalmol, 2015, 8: 28. DOI: 10.4103/0974-620X.149861.
[6] Cicinelli, Maria Vittoria; Rabiolo, Alessandro; Sacconi, Riccardo; Carnevali, Adriano; Querques, Lea; Bandello, Francesco; Querques, Giuseppe. Optical coherence tomography angiography in dry age-related macular degeneration. SURVEY OF OPHTHALMOLOGY, 2018, 63(02): 236-244.
[7] Feng, Shouting; Zhuo, Zhongshuo; Pan, Daru; Tian, Qi. CcNet: A cross-connected convolutional network for segmenting retinal vessels using multi-scale features. NEUROCOMPUTING, 2020, 392: 268-276.
[8] Gao, Zhentao; Li, Jie; Guo, Jixiang; Chen, Yuanyuan; Yi, Zhang; Zhong, Jie. Diagnosis of Diabetic Retinopathy Using Deep Neural Networks. IEEE ACCESS, 2019, 7: 3360-3370.
[9] Gegundez-Arias, Manuel E.; Marin-Santos, Diego; Perez-Borrero, Isaac; Vasallo-Vazquez, Manuel J. A new deep learning method for blood vessel segmentation in retinal images based on convolutional kernels and modified U-Net model. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2021, 205.
[10] Hassan, Gehad; El-Bendary, Nashwa; Hassanien, Aboul Ella; Fahmy, Ali; Shoeb, Abullah M.; Snasel, Vaclav. Retinal blood vessel segmentation approach based on mathematical morphology. INTERNATIONAL CONFERENCE ON COMMUNICATIONS, MANAGEMENT, AND INFORMATION TECHNOLOGY (ICCMIT'2015), 2015, 65: 612-622.