CSU-Net: A Context Spatial U-Net for Accurate Blood Vessel Segmentation in Fundus Images

Cited by: 78
Authors
Wang, Bo [2 ]
Wang, Shengpei [1 ,2 ]
Qiu, Shuang [2 ]
Wei, Wei [1 ,2 ]
Wang, Haibao [1 ,2 ]
He, Huiguang [1 ,2 ,3 ]
Affiliations
[1] Univ Chinese Acad Sci UCAS, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[2] Chinese Acad Sci CASIA, Inst Automat, Res Ctr Brain Inspired Intelligence, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[3] Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image segmentation; Feature extraction; Biomedical imaging; Blood vessels; Machine learning; Task analysis; Fundus images; blood vessel segmentation; CSU-Net; feature fusion; structure loss; RETINAL IMAGES; MATCHED-FILTER; NETWORK;
DOI
10.1109/JBHI.2020.3011178
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Blood vessel segmentation in fundus images is a critical procedure in the diagnosis of ophthalmic diseases. Recent deep learning methods achieve high accuracy in vessel segmentation but still struggle to segment microvascular structures and detect vessel boundaries. This is because common Convolutional Neural Networks (CNNs) cannot preserve rich spatial information and a large receptive field simultaneously. Moreover, CNN models for vessel segmentation are usually trained with a pixel-wise cross-entropy loss that weights all pixels equally, which tends to miss fine vessel structures. In this paper, we propose a novel Context Spatial U-Net (CSU-Net) for blood vessel segmentation. In contrast to other U-Net based models, we design a two-channel encoder: a context channel with multi-scale convolutions to enlarge the receptive field and a spatial channel with large kernels to retain spatial information. To combine and strengthen the features extracted from the two paths, we introduce a feature fusion module (FFM) and an attention skip module (ASM). Furthermore, we propose a structure loss, which adds a spatial weight to the cross-entropy loss and guides the network to focus more on thin vessels and boundaries. We evaluated this model on three public datasets: DRIVE, CHASE-DB1, and STARE. The results show that CSU-Net achieves higher segmentation accuracy than current state-of-the-art methods.
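The structure loss described above reweights the pixel-wise cross-entropy so that thin vessels and vessel boundaries contribute more to training. The paper's exact weighting scheme is not given in this record, so the following is a minimal NumPy sketch under a simple assumption: vessel pixels whose 4-neighborhood touches background (an approximation of boundary and thin-vessel pixels) receive an extra weight `alpha`. The names `boundary_mask` and `structure_weighted_bce` are illustrative, not from the paper.

```python
import numpy as np

def boundary_mask(labels):
    """Mark vessel pixels (label 1) with at least one background 4-neighbor.

    For thin (1-pixel-wide) vessels, every vessel pixel is a boundary pixel,
    so this also up-weights the microvasculature.
    """
    padded = np.pad(labels, 1, mode="edge")
    # Minimum over the four axis-aligned neighbors of each pixel.
    neigh_min = np.minimum.reduce([
        padded[:-2, 1:-1],   # up
        padded[2:, 1:-1],    # down
        padded[1:-1, :-2],   # left
        padded[1:-1, 2:],    # right
    ])
    return (labels == 1) & (neigh_min == 0)

def structure_weighted_bce(probs, labels, alpha=4.0, eps=1e-7):
    """Binary cross-entropy with extra spatial weight on boundary pixels."""
    w = 1.0 + alpha * boundary_mask(labels).astype(np.float64)
    p = np.clip(probs, eps, 1.0 - eps)
    bce = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    return float((w * bce).sum() / w.sum())
```

With `alpha = 0` this reduces to the ordinary (uniform) pixel-wise cross-entropy; larger `alpha` shifts the gradient budget toward boundary and thin-vessel pixels, which is the stated intent of the structure loss.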
Pages: 1128-1138
Page count: 11