Accurate Retinal Vessel Segmentation in Color Fundus Images via Fully Attention-Based Networks

Cited by: 55
Authors
Li, Kaiqi [1 ]
Qi, Xingqun [1 ]
Luo, Yiwen [1 ]
Yao, Zeyi [1 ]
Zhou, Xiaoguang [1 ]
Sun, Muyi [1 ,2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Automat, Beijing 100876, Peoples R China
[2] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Ctr Res Intelligent Percept & Comp, Beijing, Peoples R China
Keywords
Image segmentation; Retinal vessels; Semantics; Feature extraction; Biomedical imaging; Task analysis; Attention mechanism; Deep learning; Image processing; Retinal vessel segmentation; Blood vessels
DOI
10.1109/JBHI.2020.3028180
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Automatic retinal vessel segmentation is important for the diagnosis and prevention of ophthalmic diseases. Existing deep learning models for retinal vessel segmentation tend to treat all pixels equally, yet the multi-scale structure of the vasculature is a vital factor affecting segmentation quality, especially for thin vessels. To address this gap, we propose a novel Fully Attention-based Network (FANet) that uses attention mechanisms to adaptively learn rich feature representations and aggregate multi-scale information. The framework consists of an image pre-processing procedure and a semantic segmentation network. Green channel extraction (GE) and contrast limited adaptive histogram equalization (CLAHE) are employed as pre-processing to enhance the texture and contrast of the retinal fundus images. The segmentation network then combines two types of attention modules with a U-Net backbone. We propose a lightweight dual-direction attention block to model global dependencies and reduce intra-class inconsistency; the weights of the feature maps are updated according to the semantic correlation between pixels. The dual-direction attention block uses horizontal and vertical pooling operations to produce the attention map, so the network aggregates global contextual information from semantically closer regions, i.e., series of pixels belonging to the same object category. Meanwhile, we adopt the selective kernel (SK) unit in place of the standard convolution to obtain multi-scale features over different receptive field sizes via soft attention. We further demonstrate that the proposed model effectively identifies irregular, noisy, and multi-scale retinal vessels. Extensive experiments on the DRIVE, STARE, and CHASE_DB1 datasets show that our method achieves state-of-the-art performance.
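As a rough illustration of the pre-processing step described in the abstract (green channel extraction followed by CLAHE), a minimal Python/OpenCV sketch is given below; the clip_limit and tile_grid values are illustrative assumptions, not the settings reported in the paper.

import cv2

def preprocess_fundus(image_path, clip_limit=2.0, tile_grid=(8, 8)):
    """Green channel extraction (GE) followed by CLAHE.

    clip_limit and tile_grid are illustrative defaults, not the paper's
    reported settings.
    """
    bgr = cv2.imread(image_path)   # OpenCV loads color images in BGR order
    green = bgr[:, :, 1]           # the green channel shows the strongest vessel contrast
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(green)      # contrast limited adaptive histogram equalization

The dual-direction attention block is described only at a high level: horizontal and vertical pooling produce an attention map that reweights the feature maps according to the semantic correlation between pixels. The PyTorch sketch below follows that description; the bottleneck layers, reduction ratio, and the way the two pooled summaries are combined are assumptions, not the published FANet design.

import torch
import torch.nn as nn

class DualDirectionAttention(nn.Module):
    """Sketch of a dual-direction attention block: horizontal and vertical
    average pooling yield directional summaries that are combined into a
    per-pixel attention map. Only the pooling idea follows the abstract;
    the layer choices here are assumptions."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.bottleneck = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W)
        h_pool = x.mean(dim=3, keepdim=True)  # pool along width  -> (N, C, H, 1)
        v_pool = x.mean(dim=2, keepdim=True)  # pool along height -> (N, C, 1, W)
        # Broadcasting h_pool + v_pool restores a full (N, C, H, W) map
        attention = self.sigmoid(self.bottleneck(h_pool + v_pool))
        return x * attention                  # reweight the input features

In a U-Net-style segmentation network such a block would typically be applied to encoder or decoder feature maps before they are fused through skip connections; where exactly FANet places its attention modules is detailed in the paper itself.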
Pages: 2071-2081
Number of Pages: 11