VG-DropDNet a Robust Architecture for Blood Vessels Segmentation on Retinal Image

Cited by: 6
Authors
Desiani, Anita [1 ]
Erwin [2 ]
Suprihatin, Bambang [1 ]
Efriliyanti, Filda [3 ]
Arhami, Muhammad [4 ]
Setyaningsih, Emy [5 ]
Affiliations
[1] Univ Sriwijaya, Math & Nat Sci Fac, Math Dept, Indralaya 30662, Indonesia
[2] Univ Sriwijaya, Comp Sci Fac, Comp Engn Dept, Indralaya 30662, Indonesia
[3] Univ Sriwijaya, Math & Nat Sci Fac, Computat Lab, Indralaya 30662, Indonesia
[4] Politekn Negeri Lhokseumawe, Informat Engn Dept, Lhokseumawe 24375, Indonesia
[5] Inst Sains & Teknol Akprind, Comp Syst Dept, Yogyakarta 55222, Indonesia
Keywords
Computer architecture; Image segmentation; Retina; Blood vessels; Sensitivity; Neurons; Medical diagnostic imaging; DenseNet; retinal image; segmentation; U-Net; VG-DropDNet; CONDITIONAL RANDOM-FIELD; NEURAL-NETWORK; AUTOMATIC SEGMENTATION; MODELS;
DOI
10.1109/ACCESS.2022.3202890
CLC Classification Number
TP [Automation and Computer Technology];
Subject Classification Code
0812;
Abstract
Adding layers to the U-Net architecture increases the number of parameters and the network complexity. The Visual Geometry Group (VGG) architecture with a 16-layer backbone can overcome this problem with small convolutions. Densely connected networks (DenseNet) can be used to avoid redundant feature learning in VGG by directly connecting each layer to the feature maps of all previous layers. Adding a dropout layer protects DenseNet from overfitting. This study proposes VG-DropDNet, an architecture that combines VGG, DenseNet, and U-Net with a dropout layer for retinal blood vessel segmentation. VG-DropDNet is applied to the Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE) datasets. On DRIVE it achieves an accuracy of 95.36%, a sensitivity of 79.74%, and a specificity of 97.61%. The F1-score of 0.8144 on DRIVE indicates that VG-DropDNet has high precision and recall, and the IoU of 68.70% shows that its output closely resembles the ground truth. The results on STARE are excellent: an accuracy of 98.56%, a sensitivity of 91.24%, a specificity of 92.99%, and an IoU of 86.90%, showing that the proposed method is excellent and robust for retinal blood vessel segmentation. Cohen's Kappa coefficient of VG-DropDNet is 0.8386 on DRIVE and 0.98 on STARE, indicating that its results are consistent and precise on both datasets. Taken together, the results on both datasets indicate that VG-DropDNet is effective, robust, and stable for retinal blood vessel segmentation.
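All of the evaluation metrics quoted in the abstract (accuracy, sensitivity, specificity, F1-score, IoU, and Cohen's Kappa) can be derived from the pixel-wise confusion matrix of a predicted vessel mask against its ground truth. The following plain-Python sketch shows the standard definitions; the function name and the example counts are illustrative assumptions, not taken from the paper:

```python
def segmentation_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute common segmentation metrics from pixel-wise confusion counts.

    tp: vessel pixels predicted as vessel; fp: background predicted as vessel;
    fn: vessel pixels missed; tn: background predicted as background.
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)      # recall: fraction of vessel pixels found
    specificity = tn / (tn + fp)      # fraction of background correctly kept
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    iou = tp / (tp + fp + fn)         # intersection over union of vessel masks
    # Cohen's Kappa: observed agreement corrected for chance agreement
    p_observed = accuracy
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return {
        "accuracy": accuracy,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "f1": f1,
        "iou": iou,
        "kappa": kappa,
    }
```

Note that, unlike accuracy, the kappa and IoU values discount the large background class, which is why they are the more telling numbers for thin structures such as retinal vessels.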
Pages: 92067-92083
Page count: 17
References
73 records
[1]   HTTU-Net: Hybrid Two Track U-Net for Automatic Brain Tumor Segmentation [J].
Aboelenein, Nagwa M. ;
Piao Songhao ;
Koubaa, Anis ;
Noor, Alam ;
Afifi, Ahmed .
IEEE ACCESS, 2020, 8 :101406-101415
[2]   CMM-Net: Contextual multi-scale multi-level network for efficient biomedical image segmentation [J].
Al-masni, Mohammed A. ;
Kim, Dong-Hyun .
SCIENTIFIC REPORTS, 2021, 11 (01)
[3]  
Albawi S, 2017, I C ENG TECHNOL
[4]   Recurrent residual U-Net for medical image segmentation [J].
Alom, Md Zahangir ;
Yakopcic, Chris ;
Hasan, Mahmudul ;
Taha, Tarek M. ;
Asari, Vijayan K. .
JOURNAL OF MEDICAL IMAGING, 2019, 6 (01)
[5]  
[Anonymous], 2017, 34 INT S AUTOMATION, DOI 10.22260/ISARC2017/0066
[6]  
Atapour-Abarghouei A, 2019, IEEE IMAGE PROC, P4295, DOI 10.1109/ICIP.2019.8803551
[7]   A Review on the Strategies and Techniques of image Segmentation [J].
Bali, Akanksha ;
Singh, Shailendra Narayan .
2015 5TH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTING & COMMUNICATION TECHNOLOGIES ACCT 2015, 2015, :113-120
[8]   PCAT-UNet: UNet-like network fused convolution and transformer for retinal vessel segmentation [J].
Chen, Danny ;
Yang, Wenzhong ;
Wang, Liejun ;
Tan, Sixiang ;
Lin, Jiangzhaung ;
Bu, Wenxiu .
PLOS ONE, 2022, 17 (01)
[9]   DRINet for Medical Image Segmentation [J].
Chen, Liang ;
Bentley, Paul ;
Mori, Kensaku ;
Misawa, Kazunari ;
Fujiwara, Michitaka ;
Rueckert, Daniel .
IEEE TRANSACTIONS ON MEDICAL IMAGING, 2018, 37 (11) :2453-2462
[10]   Fully automatic knee osteoarthritis severity grading using deep neural networks with a novel ordinal loss [J].
Chen, Pingjun ;
Gao, Linlin ;
Shi, Xiaoshuang ;
Allen, Kyle ;
Yang, Lin .
COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, 2019, 75 :84-92