Res2-UNeXt: a novel deep learning framework for few-shot cell image segmentation

Cited by: 28
Authors
Chan, Sixian [1 ]
Huang, Cheng [1 ]
Bai, Cong [1 ]
Ding, Weilong [1 ]
Chen, Shengyong [1 ]
Affiliations
[1] Zhejiang Univ Technol, Coll Comp Sci, 288 Rd LiuHe, Hangzhou 310023, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Segmentation of medical images; Data augmentation; Deep learning; Image processing; CONVOLUTIONAL NEURAL-NETWORKS; BRAIN-TUMOR SEGMENTATION;
DOI
10.1007/s11042-021-10536-5
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Recently, the development of more accurate and more efficient deep learning algorithms for medical image segmentation has attracted increasing attention from researchers. Most methods increase the depth of the network as a substitute for acquiring multi-scale information, and the cost of annotating training images by hand is prohibitively high. In this paper, we propose Res2-UNeXt, a multi-scale deep architecture for medical image segmentation with improved performance. Our architecture is an encoder-decoder network built from Res2XBlocks, which are designed to capture multi-scale information in images more effectively. To complement Res2-UNeXt, we put forward a simple and efficient data augmentation method; modeled on the process of cell movement and deformation, it has a biological motivation. We evaluate Res2-UNeXt against recent variants of U-Net (UNet++, CE-Net, and LadderNet) and against methods with non-U-Net architectures (FCN and DFANet) on four cell image sequences from the ISBI Cell Tracking Challenge 2019 dataset. The experimental results demonstrate that Res2-UNeXt achieves better performance than both the recent U-Net variants and the non-U-Net methods. Moreover, ablation experiments confirm the effectiveness of the proposed architecture and of the data augmentation method.
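The abstract's two technical components can be illustrated with short sketches. First, a minimal multi-scale residual block in the spirit of Res2Net (Gao et al., 2021), on which the paper's Res2XBlocks are based; the exact Res2XBlock design is not given in this record, so the class name `Res2Block`, the `scale` parameter, and the layer layout below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Res2Net-style multi-scale block (after Gao et al., 2021).
# Assumption: the paper's Res2XBlock follows this split/cascade pattern; the
# exact design is not specified in this record.
import torch
import torch.nn as nn

class Res2Block(nn.Module):
    def __init__(self, channels: int, scale: int = 4):
        super().__init__()
        assert channels % scale == 0, "channels must split evenly into scale groups"
        self.scale = scale
        width = channels // scale
        # One 3x3 conv per group except the first, which passes through unchanged.
        self.convs = nn.ModuleList([
            nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False)
            for _ in range(scale - 1)
        ])
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each group after the first sees its own features plus the previous
        # group's output, so the effective receptive field grows group by
        # group; this is what yields multi-scale features within one block.
        groups = torch.chunk(x, self.scale, dim=1)
        outputs, prev = [groups[0]], None
        for conv, g in zip(self.convs, groups[1:]):
            prev = conv(g if prev is None else g + prev)
            outputs.append(prev)
        return self.relu(self.bn(torch.cat(outputs, dim=1)) + x)  # residual add
```

Second, the abstract only states that the augmentation mimics cell movement and deformation; a common way to realize that idea is elastic deformation via a smoothed random displacement field, sketched below under that assumption (`alpha` and `sigma` are illustrative knobs, not values from the paper).

```python
# Hedged sketch of deformation-based augmentation for 2D cell images: a random
# displacement field, Gaussian-smoothed so neighboring pixels move coherently.
# This is a generic elastic deformation, not the authors' exact procedure.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=34.0, sigma=4.0, rng=None):
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Sample the input at the displaced coordinates (bilinear interpolation).
    return map_coordinates(image, [ys + dy, xs + dx], order=1, mode="reflect")
```

For segmentation, the same displacement field must be applied to the image and its label mask (with order=0, nearest-neighbor, for the mask) so the annotation stays aligned with the deformed cells.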
Pages: 13275-13288
Page count: 14