Few-Shot Learning for Medical Image Segmentation Using 3D U-Net and Model-Agnostic Meta-Learning (MAML)

Cited by: 6
Authors
Alsaleh, Aqilah M. [1,2]
Albalawi, Eid [1]
Algosaibi, Abdulelah [1]
Albakheet, Salman S. [3]
Khan, Surbhi Bhatia [4,5]
Affiliations
[1] King Faisal Univ, Coll Comp Sci & Informat Technol, AlAhsa 40031982, Saudi Arabia
[2] AlAhsa Hlth Cluster, Dept Informat Technol, AlAhsa 315836421, Saudi Arabia
[3] King Faisal Gen Hosp, Dept Radiol, AlAhsa 36361, Saudi Arabia
[4] Univ Salford, Sch Sci Engn & Environm, Dept Data Sci, Manchester M5 4WT, England
[5] Lebanese Amer Univ, Dept Elect & Comp Engn, POB 13-5053, Byblos, Lebanon
Keywords
few-shot learning; MAML; medical image segmentation; meta-learning; U-Net;
DOI
10.3390/diagnostics14121213
Chinese Library Classification
R5 [Internal Medicine];
Subject Classification Code
1002; 100201;
Abstract
Deep learning has attained state-of-the-art results in general image segmentation; however, it requires a substantial number of annotated images to achieve the desired outcomes. In the medical field, the availability of annotated images is often limited. To address this challenge, few-shot learning techniques, which leverage prior knowledge to generalize rapidly to new tasks from only a few samples, have been successfully adapted to medical imaging. In this paper, we employ a gradient-based method known as Model-Agnostic Meta-Learning (MAML) for medical image segmentation. MAML is a meta-learning algorithm that adapts quickly to a new task by updating a model's parameters from a limited set of training samples. As the backbone, we use an enhanced 3D U-Net, a convolutional neural network designed specifically for volumetric medical image segmentation. We evaluate our approach on the TotalSegmentator dataset using a few annotated images for each of four tasks: liver, spleen, right kidney, and left kidney. The results demonstrate that our approach enables rapid adaptation to new tasks from only a few annotated images. In the 10-shot setting, it achieved mean Dice coefficients of 93.70%, 85.98%, 81.20%, and 89.58% for liver, spleen, right kidney, and left kidney segmentation, respectively; in the 5-shot setting, it attained 90.27%, 83.89%, 77.53%, and 87.01%, respectively. Finally, we assess the approach on a dataset collected from a local hospital, where, in the 5-shot setting, it achieved mean Dice coefficients of 90.62%, 79.86%, 79.87%, and 78.21% for liver, spleen, right kidney, and left kidney segmentation, respectively.
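Because the abstract walks through both the MAML update rule and Dice-based evaluation, a compact sketch may help clarify how the two fit together. The PyTorch code below is a minimal illustration of one MAML meta-update over binary segmentation tasks, not the authors' implementation: the soft Dice loss, the single default inner step, the learning-rate values, and the assumption that model maps a volume to one logit channel are all assumptions introduced for this example; the paper's enhanced 3D U-Net would stand in for model.

import torch
from torch.func import functional_call

def soft_dice_loss(logits, target, eps=1e-6):
    """1 - soft Dice between sigmoid(logits) and a binary ground-truth mask."""
    probs = torch.sigmoid(logits).flatten()
    target = target.flatten()
    intersection = (probs * target).sum()
    dice = (2.0 * intersection + eps) / (probs.sum() + target.sum() + eps)
    return 1.0 - dice

def maml_meta_step(model, meta_opt, tasks, inner_lr=0.01, inner_steps=1):
    """One MAML meta-update over a batch of few-shot segmentation tasks.

    Each task is (support_x, support_y, query_x, query_y): the support set
    holds the few annotated volumes used for inner-loop adaptation, and the
    query set provides the outer-loop meta-objective. inner_lr and
    inner_steps are illustrative defaults, not the paper's hyperparameters.
    """
    meta_opt.zero_grad()
    meta_loss = torch.zeros(())
    for support_x, support_y, query_x, query_y in tasks:
        # Start the inner loop from the current meta-parameters.
        fast = dict(model.named_parameters())
        for _ in range(inner_steps):
            loss = soft_dice_loss(
                functional_call(model, fast, (support_x,)), support_y)
            # create_graph=True keeps the inner update differentiable,
            # so the outer loss can backpropagate through the adaptation.
            grads = torch.autograd.grad(loss, list(fast.values()),
                                        create_graph=True)
            fast = {name: p - inner_lr * g
                    for (name, p), g in zip(fast.items(), grads)}
        # Outer loop: evaluate the adapted ("fast") weights on the query set.
        meta_loss = meta_loss + soft_dice_loss(
            functional_call(model, fast, (query_x,)), query_y)
    (meta_loss / len(tasks)).backward()  # gradients reach the meta-parameters
    meta_opt.step()
    return meta_loss.item() / len(tasks)

At test time, the same inner loop, run without create_graph, would adapt the meta-trained weights to a new organ from the 5 or 10 annotated support volumes reported in the abstract before segmenting unseen volumes.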
Pages: 25