2D medical image segmentation via learning multi-scale contextual dependencies

Cited by: 5
Authors
Pang, Shuchao [1]
Du, Anan [2]
Yu, Zhenmei [3]
Orgun, Mehmet A. [1,4]
Affiliations
[1] Macquarie Univ, Dept Comp, N Ryde, NSW 2109, Australia
[2] Univ Technol Sydney, Sch Elect & Data Engn, Ultimo, NSW 2007, Australia
[3] Shandong Womens Univ, Sch Data & Comp Sci, Jinan 250014, Peoples R China
[4] Macau Univ Sci & Technol, Fac Informat Technol, Ave Wai Long, Taipa 999078, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Medical image segmentation; Contextual dependency; Hepatic tumors; COVID-19 lung infection; Retinal vessel; Visualization; CONVOLUTIONAL NEURAL-NETWORKS; VESSEL SEGMENTATION; MODEL; NET;
DOI
10.1016/j.ymeth.2021.05.015
Chinese Library Classification (CLC)
Q5 [Biochemistry];
Discipline Codes
071010; 081704;
Abstract
Automatic medical image segmentation plays an important role as a diagnostic aid in the identification of diseases and their treatment in clinical settings. Recently proposed methods based on Convolutional Neural Networks (CNNs) have demonstrated their potential in image processing tasks, including some medical image analysis tasks. These methods can learn various feature representations with numerous weight-shared convolutional kernels; however, the missed diagnosis rate of regions of interest (ROIs) remains high in medical image segmentation. Two crucial factors behind this shortcoming, which have been overlooked, are the small ROIs in medical images and the limited contextual information captured by existing network models. In order to reduce the missed diagnosis rate of ROIs in medical images, we propose a new segmentation framework which enhances the representative capability of small ROIs (particularly in deep layers) and explicitly learns global contextual dependencies in multi-scale feature spaces. In particular, the local features and their global dependencies from each feature space are adaptively aggregated along both the spatial and the channel dimensions. Moreover, visualization comparisons of the features learned by our framework further improve the interpretability of neural networks. Experimental results show that, in comparison to popular medical image segmentation and general image segmentation methods, our proposed framework achieves state-of-the-art performance on the liver tumor segmentation task with 91.18% Sensitivity, the COVID-19 lung infection segmentation task with 75.73% Sensitivity, and the retinal vessel detection task with 82.68% Sensitivity. Moreover, (parts of) the proposed framework can be integrated into most recently proposed fully CNN-based models to improve their effectiveness in medical image segmentation tasks.
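The abstract describes adaptively aggregating local features with their global contextual dependencies along both the spatial and the channel dimensions, applied across multi-scale feature spaces. The sketch below is only a minimal illustration of that general idea, assuming a PyTorch-style dual-attention design; the module and parameter names (SpatialContextBlock, ChannelContextBlock, MultiScaleContextHead, gamma) are hypothetical and do not come from the paper's released code.

```python
# Hypothetical sketch: spatial + channel aggregation of global dependencies
# over multi-scale feature maps. Shapes and names are assumptions for
# illustration, not the authors' implementation.
import torch
import torch.nn as nn


class SpatialContextBlock(nn.Module):
    """Every spatial position attends to all others (global spatial dependencies)."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable fusion weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)            # B x HW x C'
        k = self.key(x).flatten(2)                               # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)                      # B x HW x HW
        v = self.value(x).flatten(2)                             # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)        # re-weighted features
        return self.gamma * out + x                              # adaptive residual fusion


class ChannelContextBlock(nn.Module):
    """Models inter-channel dependencies with a channel-to-channel attention map."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        flat = x.flatten(2)                                          # B x C x HW
        attn = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)    # B x C x C
        out = (attn @ flat).view(b, c, h, w)
        return self.gamma * out + x


class MultiScaleContextHead(nn.Module):
    """Applies both blocks to each scale of a feature pyramid and sums the two branches."""

    def __init__(self, channels_per_scale):
        super().__init__()
        self.spatial = nn.ModuleList([SpatialContextBlock(c) for c in channels_per_scale])
        self.channel = nn.ModuleList([ChannelContextBlock() for _ in channels_per_scale])

    def forward(self, features):
        return [s(f) + ch(f) for f, s, ch in zip(features, self.spatial, self.channel)]


if __name__ == "__main__":
    # Two feature scales, e.g. from a U-Net-like encoder.
    feats = [torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32)]
    head = MultiScaleContextHead([64, 128])
    print([f.shape for f in head(feats)])  # shapes are preserved per scale
```

In such a design, the learnable gamma weights let the network decide, per scale, how much global context to blend into the local features, which is one plausible reading of the "adaptive aggregation" described above.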
Pages: 40-53
Number of pages: 14