i3Deep: Efficient 3D interactive segmentation with the nnU-Net

Times cited: 0
Authors
Gotkowski, Karol [1 ,2 ]
Gonzalez, Camila [3 ]
Kaltenborn, Isabel [4 ]
Fischbach, Ricarda [4 ]
Bucher, Andreas [4 ]
Mukhopadhyay, Anirban [3 ]
Affiliations
[1] Helmholtz Imaging, Appl Comp Vis Lab, Berlin, Germany
[2] German Canc Res Ctr, Div Med Image Comp, Heidelberg, Germany
[3] Tech Univ Darmstadt, Karolinenpl 5, D-64289 Darmstadt, Germany
[4] Univ Hosp Frankfurt, Inst Diagnost & Intervent Radiol, Theodor Stern Kai 7, D-60590 Frankfurt, Germany
Source
INTERNATIONAL CONFERENCE ON MEDICAL IMAGING WITH DEEP LEARNING, VOL 172 | 2022
Keywords
interactive segmentation; nnU-Net; uncertainty; out-of-distribution; MEDICAL IMAGE SEGMENTATION;
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
3D interactive segmentation is highly relevant for reducing the annotation time of experts. However, current methods often achieve only small segmentation improvements per interaction, as lightweight models are a requirement to ensure near-real-time usage. Models with better predictive performance, such as the nnU-Net, cannot be employed for interactive segmentation due to their high computational demands, which result in long inference times. To solve this issue, we propose the 3D interactive segmentation framework i3Deep. Slices are selected through uncertainty estimation in an offline setting and afterwards corrected by an expert. The slices are then fed to a refinement nnU-Net, which significantly improves the global 3D segmentation from the local corrections. This approach bypasses the issue of long inference times by moving expensive computations into an offline setting that does not involve the expert. For three different anatomies, our approach reduces the workload of the expert by 80.3%, while significantly improving the Dice score by up to 39.5%, outperforming other state-of-the-art methods by a clear margin. Even on out-of-distribution data, i3Deep is able to improve the segmentation by 19.3%.
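The uncertainty-based slice selection described in the abstract can be sketched as follows. This is a minimal illustration only, using voxel-wise softmax entropy as the uncertainty measure; the paper's actual estimator may differ (e.g. Monte Carlo dropout as in Gal, 2016), and `select_uncertain_slices` is a hypothetical helper name, not taken from the authors' code.

```python
import numpy as np

def select_uncertain_slices(probs: np.ndarray, num_slices: int) -> np.ndarray:
    """Rank axial slices of a probability volume by mean voxel entropy.

    probs: array of shape (C, Z, Y, X) holding per-class softmax probabilities.
    Returns the indices of the num_slices most uncertain slices along Z,
    i.e. the slices an expert would be asked to correct first.
    """
    eps = 1e-12  # avoid log(0)
    # Voxel-wise entropy over the class axis -> shape (Z, Y, X)
    entropy = -np.sum(probs * np.log(probs + eps), axis=0)
    # Aggregate to one uncertainty score per slice -> shape (Z,)
    slice_scores = entropy.mean(axis=(1, 2))
    # Most uncertain slices first
    return np.argsort(slice_scores)[::-1][:num_slices]

# Toy volume: 2 classes, 4 slices; slice 2 is maximally uncertain (0.5/0.5)
fg = np.full((4, 8, 8), 0.01)
fg[2] = 0.5
probs = np.stack([1.0 - fg, fg])
print(select_uncertain_slices(probs, 1))  # -> [2]
```

In the i3Deep pipeline, the selected slices would be corrected by the expert and then passed, together with the image, to the refinement nnU-Net; both of those steps are outside the scope of this sketch.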
Pages: 441-456
Page count: 16
Related papers
32 records in total
  • [1] Antonelli Michela, 2021, The Medical Segmentation Decathlon, P1
  • [2] iW-Net: an automatic and minimalistic interactive lung nodule segmentation deep network
    Aresta, Guilherme
    Jacobs, Colin
    Araujo, Teresa
    Cunha, Antonio
    Ramos, Isabel
    van Ginneken, Bram
    Campilho, Aurelio
    [J]. SCIENTIFIC REPORTS, 2019, 9 (1)
  • [3] Iterative Interaction Training for Segmentation Editing Networks
    Bredell, Gustav
    Tanner, Christine
    Konukoglu, Ender
    [J]. MACHINE LEARNING IN MEDICAL IMAGING: 9TH INTERNATIONAL WORKSHOP, MLMI 2018, 2018, 11046 : 363 - 370
  • [4] Cheng Danni, 2017, A Point Says a Lot: An Interactive Segmentation Method for MR Prostate via One-Point Labeling, V10541, P106, DOI 10.1007/978-3-319-67389-9
  • [5] Gal Y, 2016, PR MACH LEARN RES, V48
  • [6] Random walks for image segmentation
    Grady, Leo
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2006, 28 (11) : 1768 - 1783
  • [7] EXACT MAXIMUM A-POSTERIORI ESTIMATION FOR BINARY IMAGES
    GREIG, DM
    PORTEOUS, BT
    SEHEULT, AH
    [J]. JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B-METHODOLOGICAL, 1989, 51 (02): : 271 - 279
  • [8] Guo CA, 2017, PR MACH LEARN RES, V70
  • [9] nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation
    Isensee, Fabian
    Jaeger, Paul F.
    Kohl, Simon A. A.
    Petersen, Jens
    Maier-Hein, Klaus H.
    [J]. NATURE METHODS, 2021, 18 (02) : 203 - +
  • [10] Jirik, mjirik/imcut: 3D graph cut segmentation, P1