Liver segmentation in abdominal CT images via auto-context neural network and self-supervised contour attention

Cited by: 19
Authors
Chung, Minyoung [1]
Lee, Jingyu [2 ]
Park, Sanguk [2 ]
Lee, Chae Eun [2 ]
Lee, Jeongjin [3 ]
Shin, Yeong-Gil [2 ]
Affiliations
[1] Soongsil Univ, Sch Software, 369 Sangdo Ro, Seoul 06978, South Korea
[2] Seoul Natl Univ, Dept Comp Sci & Engn, 1 Gwanak Ro, Seoul 08826, South Korea
[3] Soongsil Univ, Sch Comp Sci & Engn, 369 Sangdo Ro, Seoul 06978, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Auto-context neural network; Contour attention network; High-level residual shape prior; Liver segmentation; Self-supervised neural network; SPEED;
DOI
10.1016/j.artmed.2021.102023
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Objective: Accurate image segmentation of the liver is a challenging problem owing to its large shape variability and unclear boundaries. Although fully convolutional neural networks (CNNs) have produced groundbreaking results, few studies have focused on generalization performance. In this study, we introduce a CNN for liver segmentation on abdominal computed tomography (CT) images that focuses on both generalization and accuracy. Methods: To improve generalization performance, we first propose an auto-context algorithm within a single CNN. The proposed auto-context neural network exploits an effective high-level residual estimation to obtain a shape prior. Identical dual paths are trained to represent mutually complementary features for an accurate posterior analysis of the liver. We further extend the network with a self-supervised contour scheme: sparse contour features are trained by penalizing errors on the ground-truth contour, so that more contour attention is focused on failure regions. Results: We used 180 abdominal CT images for training and validation. Two-fold cross-validation was performed for comparison with state-of-the-art neural networks. The experimental results show that the proposed network achieves better accuracy than the state-of-the-art networks, reducing the Hausdorff distance by 10.31%. Novel multiple N-fold cross-validations were conducted to demonstrate the generalization performance of the proposed network. Conclusion and significance: The proposed method minimized the error between training and test images more than any other modern neural network. Moreover, the contour scheme was successfully incorporated into the network by introducing a self-supervised metric.
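As a rough illustration of the two ideas described in the abstract, the sketch below shows (a) an auto-context loop in which the posterior map from one pass is fed back as a shape prior for the next pass, and (b) a contour-weighted cross-entropy that up-weights pixels on the ground-truth boundary. All function names and the toy model are hypothetical stand-ins; this is not the paper's implementation, only a minimal NumPy sketch of the concepts.

```python
import numpy as np

def auto_context_segment(image, model, n_iters=2):
    """Auto-context idea: feed the previous posterior back as a shape prior."""
    prior = np.full(image.shape, 0.5)       # uninformative initial prior
    for _ in range(n_iters):
        prior = model(image, prior)         # refine the posterior using the prior
    return prior

def mask_contour(mask):
    """Contour pixels: foreground pixels with at least one background 4-neighbour."""
    p = np.pad(mask, 1)
    nb_min = np.minimum.reduce(
        [p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
    return mask * (nb_min == 0)

def contour_weighted_bce(pred, gt, weight=5.0, eps=1e-7):
    """Cross-entropy that penalizes errors on the ground-truth contour harder."""
    w = 1.0 + weight * mask_contour(gt)
    bce = -(gt * np.log(pred + eps) + (1 - gt) * np.log(1 - pred + eps))
    return float((w * bce).mean())

# Toy stand-in for the network: blend an intensity cue with the prior map.
def toy_model(image, prior):
    likelihood = 1.0 / (1.0 + np.exp(-(image - image.mean())))
    return 0.5 * likelihood + 0.5 * prior

rng = np.random.default_rng(0)
img = rng.random((8, 8))
gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1                            # a 4x4 square stands in for the liver
seg = auto_context_segment(img, toy_model)
loss = contour_weighted_bce(seg, gt)
```

In the paper the two passes are realized as identical dual paths trained jointly inside one network, rather than an explicit outer loop, and the contour term is learned self-supervisedly; the loop above only conveys the feedback-of-posterior idea.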
Pages: 12