NuClick: A deep learning framework for interactive segmentation of microscopic images

Cited by: 105
Authors
Koohbanani, Navid Alemi [1 ,2 ]
Jahanifar, Mostafa [3 ]
Tajadin, Neda Zamani [4 ]
Rajpoot, Nasir [1 ,2 ]
Affiliations
[1] Univ Warwick, Dept Comp Sci, Warwick, England
[2] Alan Turing Inst, London, England
[3] NRP Co, Dept Res & Dev, Tehran, Iran
[4] Tarbiat Modares Univ, Dept Elect Engn, Tehran, Iran
Funding
UK Medical Research Council;
Keywords
Annotation; Interactive segmentation; Nuclear segmentation; Cell segmentation; Gland segmentation; Computational pathology; Deep learning; Video segmentation;
DOI
10.1016/j.media.2020.101771
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Object segmentation is an important step in the workflow of computational pathology. Deep learning based models generally require a large amount of labeled data for precise and reliable prediction. However, collecting labeled data is expensive because it often requires expert knowledge, particularly in the medical imaging domain, where labels are the result of a time-consuming analysis by one or more human experts. As nuclei, cells and glands are fundamental objects for downstream analysis in computational pathology/cytology, in this paper we propose NuClick, a CNN-based approach to speed up the collection of annotations for these objects while requiring minimal interaction from the annotator. We show that for nuclei and cells in histology and cytology images, one click inside each object is enough for NuClick to yield a precise annotation. For multicellular structures such as glands, we propose a novel approach that provides NuClick with a squiggle as a guiding signal, enabling it to segment the glandular boundaries. These supervisory signals are fed to the network as auxiliary inputs along with the RGB channels. With detailed experiments, we show that NuClick is applicable to a wide range of object scales, robust against variations in the user input, adaptable to new domains, and delivers reliable annotations. An instance segmentation model trained on masks generated by NuClick achieved the first rank in the LYON19 challenge. As exemplar outputs of our framework, we are releasing two datasets: 1) a dataset of lymphocyte annotations within IHC images, and 2) a dataset of segmented WBCs in blood smear images. (c) 2020 Elsevier B.V. All rights reserved.
Pages: 14
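The abstract describes feeding the annotator's guiding signal (a click inside each nucleus or cell, a squiggle across a gland) to the CNN as auxiliary inputs alongside the RGB channels. The following is a minimal Python/NumPy sketch of how such a network input could be assembled, assuming the click is rasterised as a small binary disk and that separate inclusion and exclusion guiding maps are stacked with the image; the exact encoding, disk radius, and the helper names `make_click_map` and `build_network_input` are illustrative assumptions, not details taken from this record.

```python
import numpy as np

def make_click_map(height, width, click_yx, radius=3):
    # Rasterise a single annotator click as a small binary disk.
    # The disk encoding and its radius are assumptions for illustration only.
    yy, xx = np.mgrid[0:height, 0:width]
    cy, cx = click_yx
    return ((yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2).astype(np.float32)

def build_network_input(rgb_patch, inclusion_click, exclusion_clicks=()):
    # Stack the RGB patch with two guiding maps: one marking the target
    # object (inclusion) and one marking neighbouring objects (exclusion).
    h, w, _ = rgb_patch.shape
    inc = make_click_map(h, w, inclusion_click)
    exc = np.zeros((h, w), dtype=np.float32)
    for click in exclusion_clicks:
        exc = np.maximum(exc, make_click_map(h, w, click))
    return np.dstack([rgb_patch.astype(np.float32) / 255.0, inc, exc])

# Example: a 128x128 patch with one click on the target nucleus and one
# click on a touching neighbour; the result is a 5-channel network input.
patch = np.random.randint(0, 256, size=(128, 128, 3), dtype=np.uint8)
x = build_network_input(patch, inclusion_click=(64, 64), exclusion_clicks=[(30, 90)])
print(x.shape)  # (128, 128, 5)
```

For glands, the same construction would presumably apply with the squiggle rasterised into the inclusion map instead of a single disk.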