Self-supervised few-shot medical image segmentation with spatial transformations

Cited by: 0
Authors
Titoriya, Ankit Kumar [1 ]
Singh, Maheshwari Prasad [1 ]
Singh, Amit Kumar [1 ]
Affiliations
[1] Engineering, National Institute of Technology Patna, Ashok Rajpath, Patna, Bihar
Keywords
Few-shot learning; Few-shot segmentation; Image segmentation; Machine learning; Medical image; Self-supervised learning
DOI
10.1007/s00521-024-10184-4
Abstract
Deep learning-based segmentation models often struggle when they encounter new, unseen semantic classes, and their effectiveness hinges on vast amounts of annotated data and high computational resources for training. Few-shot segmentation (FSS) networks offer a promising way to mitigate these challenges, since they can be trained with far less annotated data. Despite this potential, the inherent complexity of medical images limits the applicability of FSS in medical imaging. Recent self-supervised, label-efficient FSS models have nevertheless demonstrated remarkable efficacy in medical image segmentation tasks. This paper presents a novel FSS architecture that improves segmentation accuracy while using fewer features than existing methods, together with a novel self-supervised learning approach that exploits supervoxel and augmented superpixel images to further boost accuracy. The proposed model is evaluated on two datasets, abdominal magnetic resonance imaging (MRI) and cardiac MRI, achieving a mean Dice score and mean intersection over union of 81.62% and 70.38% on abdominal images, and 79.38% and 65.23% on cardiac images. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
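As a minimal, illustrative sketch (not the authors' implementation), the self-supervision idea summarised above can be imitated in Python: a randomly chosen superpixel of an unlabelled MRI slice serves as a pseudo foreground mask, a random spatial transformation produces a corresponding query image and mask, and Dice / intersection over union score the overlap. SLIC from scikit-image, the transform parameters, and all function names below are assumptions made for illustration only.

# Illustrative sketch only: superpixel pseudo-labels plus a random spatial
# transformation, in the spirit of the self-supervision described above.
# All names and parameter values are assumptions, not the paper's code.
import numpy as np
from skimage.segmentation import slic    # SLIC superpixels
from scipy.ndimage import rotate, shift  # simple spatial transforms

def superpixel_pseudo_mask(image_2d, n_segments=100, rng=None):
    """Pick one SLIC superpixel of a grayscale slice as a binary pseudo-label."""
    rng = rng or np.random.default_rng()
    labels = slic(image_2d, n_segments=n_segments,
                  compactness=0.1, channel_axis=None)  # grayscale slice
    chosen = rng.choice(np.unique(labels))
    return (labels == chosen).astype(np.float32)

def spatial_augment(image_2d, mask_2d, max_angle=15.0, max_shift=5.0, rng=None):
    """Apply the same random rotation and translation to image and mask."""
    rng = rng or np.random.default_rng()
    angle = rng.uniform(-max_angle, max_angle)
    offset = rng.uniform(-max_shift, max_shift, size=2)
    img_t = shift(rotate(image_2d, angle, reshape=False, order=1), offset, order=1)
    msk_t = shift(rotate(mask_2d, angle, reshape=False, order=0), offset, order=0)
    return img_t, msk_t

def dice_and_iou(pred, target, eps=1e-6):
    """Dice score and intersection over union for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

# Toy usage on a synthetic "slice"; real inputs would be MRI slices.
slice_img = np.random.rand(128, 128).astype(np.float32)
support_mask = superpixel_pseudo_mask(slice_img)
query_img, query_mask = spatial_augment(slice_img, support_mask)
print(dice_and_iou(support_mask, query_mask))

In a full self-supervised FSS setup, the untransformed slice and pseudo-mask would act as the support pair and the transformed versions as the query pair; this snippet only demonstrates pseudo-label generation with a spatial transformation and the evaluation metrics reported in the abstract.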
Pages: 18675-18691
Page count: 16