Multi-Crop Convolutional Neural Networks for Fast Lung Nodule Segmentation

Cited by: 33
Authors
Chen, Quan [1 ,2 ]
Xie, Wei [3 ]
Zhou, Pan [4 ]
Zheng, Chuansheng [1 ]
Wu, Dapeng [5 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Dept Radiol, Union Hosp, Tongji Med Coll, Wuhan 430022, Peoples R China
[2] Hubei Prov Key Lab Mol Imaging, Wuhan 430022, Peoples R China
[3] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[4] Huazhong Univ Sci & Technol, Hubei Engn Res Ctr Big Data Secur, Sch Cyber Sci & Engn, Wuhan 430074, Peoples R China
[5] Univ Florida, Dept Elect & Comp Engn, Gainesville, FL 32611 USA
Source
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE | 2022, Vol. 6, No. 05
Funding
National Natural Science Foundation of China;
Keywords
Lung; Three-dimensional displays; Feature extraction; Image segmentation; Computed tomography; Two dimensional displays; Task analysis; Convolutional neural network; loss function; lung nodule segmentation; pooling layer; MR BRAIN IMAGES; PULMONARY NODULES; AUTOMATIC SEGMENTATION; CT SCANS;
DOI
10.1109/TETCI.2021.3051910
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Computed tomography (CT) images are commonly used to assist the early diagnosis of lung nodules, so accurate lung nodule segmentation is essential for image-driven analysis tasks. However, the heterogeneity among different types of lung nodules and the similar visual appearance of nodule and non-nodule pixels make automatic lung nodule segmentation difficult. In this article, we propose a fast end-to-end framework, called the Fast Multi-crop Guided Attention (FMGA) network, to accurately segment lung nodules in CT images. Our method takes multi-crop nodule slices as input to aggregate contextual information (2D context from the current image slice and 3D context from adjacent axial slices), and exploits a global convolutional layer for nodule pixel embedding matching. To further exploit the information carried by border pixels near the nodule margin, we develop a weighted loss function that facilitates model training by considering class-balanced samples of pixels around the nodule margin. Moreover, we use a central pooling layer to facilitate the propagation of context features among neighboring pixels. We evaluate our method on the largest public lung CT dataset, LIDC, and on lung CT data collected from a local hospital in Wuhan. Experimental results show that FMGA achieves superior performance compared with state-of-the-art methods. In addition, we present an ablation study and visualization results to illustrate how each component contributes to accurate lung nodule segmentation.
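Note on the margin-weighted loss mentioned in the abstract: the paper's exact formulation is not reproduced in this record, so the sketch below only illustrates one plausible way to realize a class-balanced loss that up-weights pixels near the nodule margin. The function name, the PyTorch framework, and the max-pooling approximation of the margin band are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def margin_weighted_bce(logits, target, margin_weight=2.0, margin_px=3):
    # Illustrative sketch only, not the FMGA loss itself.
    # logits, target: float tensors of shape (N, 1, H, W); target is a binary nodule mask.
    k = 2 * margin_px + 1
    # Dilate and erode the mask via max-pooling; their difference is a band around the nodule margin.
    dilated = F.max_pool2d(target, kernel_size=k, stride=1, padding=margin_px)
    eroded = 1.0 - F.max_pool2d(1.0 - target, kernel_size=k, stride=1, padding=margin_px)
    margin_band = (dilated - eroded).clamp(0.0, 1.0)

    # Class-balancing weights: inversely proportional to the frequency of each class.
    pos_frac = target.mean().clamp(1e-6, 1.0 - 1e-6)
    class_w = target / pos_frac + (1.0 - target) / (1.0 - pos_frac)

    # Up-weight pixels that fall inside the margin band.
    weights = class_w * (1.0 + (margin_weight - 1.0) * margin_band)
    return F.binary_cross_entropy_with_logits(logits, target, weight=weights)

# Example usage with random data:
#   logits = torch.randn(2, 1, 64, 64)
#   target = (torch.rand(2, 1, 64, 64) > 0.8).float()
#   loss = margin_weighted_bce(logits, target)

In this sketch, pixels within margin_px of the nodule boundary receive margin_weight times the base class-balanced weight, mirroring the abstract's idea of emphasizing border pixels while keeping positive and negative classes balanced.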
Pages: 1190-1200
Number of pages: 11